Oct 14 14:48:57 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 14 14:48:57 crc restorecon[4584]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 
14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 14:48:57 crc 
restorecon[4584]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 14 14:48:57 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 14:48:57 crc restorecon[4584]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 
14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 
14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc 
restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 14:48:58 crc restorecon[4584]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 14:48:58 crc restorecon[4584]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 14 14:48:58 crc kubenswrapper[4860]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 14 14:48:58 crc kubenswrapper[4860]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 14 14:48:58 crc kubenswrapper[4860]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 14 14:48:58 crc kubenswrapper[4860]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 14 14:48:58 crc kubenswrapper[4860]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 14 14:48:58 crc kubenswrapper[4860]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.843580 4860 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846428 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846446 4860 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846451 4860 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846455 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846459 4860 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846463 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846467 4860 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846471 4860 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846474 4860 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846479 4860 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846483 4860 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846488 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846492 4860 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846496 4860 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846499 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846502 4860 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846506 4860 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846509 4860 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846513 4860 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846517 4860 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846520 4860 feature_gate.go:330] unrecognized 
feature gate: VSphereMultiNetworks Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846524 4860 feature_gate.go:330] unrecognized feature gate: Example Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846527 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846530 4860 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846534 4860 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846539 4860 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846545 4860 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846549 4860 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846553 4860 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846557 4860 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846561 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846565 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846568 4860 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846572 4860 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846576 4860 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846580 4860 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846585 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846590 4860 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846594 4860 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846599 4860 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846603 4860 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846609 4860 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846615 4860 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846622 4860 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846627 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846634 4860 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846638 4860 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846643 4860 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846648 4860 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846651 4860 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846655 4860 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846658 4860 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846662 4860 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846665 4860 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846669 4860 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846673 4860 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846676 4860 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846680 4860 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846683 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846687 4860 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846691 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846694 4860 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846697 4860 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846701 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846704 4860 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846707 4860 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846713 4860 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
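Three distinct feature_gate.go messages interleave above: :330 warns about gate names the kubelet simply does not know (evidently OpenShift cluster-scoped gates such as GatewayAPI or RouteAdvertisements, consumed by other components), while :351 and :353 note that a deprecated gate (KMSv1) or an already-GA gate (DisableKubeletCloudCredentialProviders, CloudDualStackNodeIPs, ValidatingAdmissionPolicy) is being set explicitly. A minimal sketch of that warn-and-continue behaviour, using stand-in gate tables rather than the kubelet's real ones:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // Unknown gates produce a warning instead of a hard error, so one
    // cluster-wide gate list can be handed to components with different
    // vocabularies. The gate table here is a stand-in, not kubelet's.
    func applyGates(spec string, known map[string]bool) map[string]bool {
    	effective := map[string]bool{}
    	for k, v := range known {
    		effective[k] = v // start from defaults
    	}
    	for _, kv := range strings.Split(spec, ",") {
    		name, val, _ := strings.Cut(kv, "=")
    		if _, ok := known[name]; !ok {
    			fmt.Printf("W unrecognized feature gate: %s\n", name)
    			continue
    		}
    		b, err := strconv.ParseBool(val)
    		if err != nil {
    			fmt.Printf("W bad value for %s: %q\n", name, val)
    			continue
    		}
    		effective[name] = b
    	}
    	return effective
    }

    func main() {
    	known := map[string]bool{"KMSv1": false, "CloudDualStackNodeIPs": false}
    	fmt.Println(applyGates("KMSv1=true,GatewayAPI=true", known))
    }
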
Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846717 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846721 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846725 4860 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.846729 4860 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847556 4860 flags.go:64] FLAG: --address="0.0.0.0" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847572 4860 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847583 4860 flags.go:64] FLAG: --anonymous-auth="true" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847591 4860 flags.go:64] FLAG: --application-metrics-count-limit="100" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847598 4860 flags.go:64] FLAG: --authentication-token-webhook="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847604 4860 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847612 4860 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847619 4860 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847625 4860 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847630 4860 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847636 4860 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847642 4860 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847647 4860 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847653 4860 flags.go:64] FLAG: --cgroup-root="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847658 4860 flags.go:64] FLAG: --cgroups-per-qos="true" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847663 4860 flags.go:64] FLAG: --client-ca-file="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847668 4860 flags.go:64] FLAG: --cloud-config="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847673 4860 flags.go:64] FLAG: --cloud-provider="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847679 4860 flags.go:64] FLAG: --cluster-dns="[]" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847685 4860 flags.go:64] FLAG: --cluster-domain="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847690 4860 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847696 4860 flags.go:64] FLAG: --config-dir="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847701 4860 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847706 4860 flags.go:64] FLAG: --container-log-max-files="5" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847713 4860 flags.go:64] FLAG: --container-log-max-size="10Mi" Oct 14 14:48:58 crc 
kubenswrapper[4860]: I1014 14:48:58.847720 4860 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847725 4860 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847730 4860 flags.go:64] FLAG: --containerd-namespace="k8s.io" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847734 4860 flags.go:64] FLAG: --contention-profiling="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847738 4860 flags.go:64] FLAG: --cpu-cfs-quota="true" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847742 4860 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847746 4860 flags.go:64] FLAG: --cpu-manager-policy="none" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847750 4860 flags.go:64] FLAG: --cpu-manager-policy-options="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847756 4860 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847760 4860 flags.go:64] FLAG: --enable-controller-attach-detach="true" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847764 4860 flags.go:64] FLAG: --enable-debugging-handlers="true" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847768 4860 flags.go:64] FLAG: --enable-load-reader="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847772 4860 flags.go:64] FLAG: --enable-server="true" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847776 4860 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847781 4860 flags.go:64] FLAG: --event-burst="100" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847786 4860 flags.go:64] FLAG: --event-qps="50" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847789 4860 flags.go:64] FLAG: --event-storage-age-limit="default=0" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847793 4860 flags.go:64] FLAG: --event-storage-event-limit="default=0" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847798 4860 flags.go:64] FLAG: --eviction-hard="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847803 4860 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847808 4860 flags.go:64] FLAG: --eviction-minimum-reclaim="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847813 4860 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847817 4860 flags.go:64] FLAG: --eviction-soft="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847821 4860 flags.go:64] FLAG: --eviction-soft-grace-period="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847825 4860 flags.go:64] FLAG: --exit-on-lock-contention="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847829 4860 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847833 4860 flags.go:64] FLAG: --experimental-mounter-path="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847837 4860 flags.go:64] FLAG: --fail-cgroupv1="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847841 4860 flags.go:64] FLAG: --fail-swap-on="true" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847844 4860 flags.go:64] FLAG: --feature-gates="" Oct 14 
14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847850 4860 flags.go:64] FLAG: --file-check-frequency="20s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847854 4860 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847858 4860 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847863 4860 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847868 4860 flags.go:64] FLAG: --healthz-port="10248" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847873 4860 flags.go:64] FLAG: --help="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847879 4860 flags.go:64] FLAG: --hostname-override="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847884 4860 flags.go:64] FLAG: --housekeeping-interval="10s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847889 4860 flags.go:64] FLAG: --http-check-frequency="20s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847895 4860 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847900 4860 flags.go:64] FLAG: --image-credential-provider-config="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847905 4860 flags.go:64] FLAG: --image-gc-high-threshold="85" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847911 4860 flags.go:64] FLAG: --image-gc-low-threshold="80" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847915 4860 flags.go:64] FLAG: --image-service-endpoint="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847920 4860 flags.go:64] FLAG: --kernel-memcg-notification="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847925 4860 flags.go:64] FLAG: --kube-api-burst="100" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847930 4860 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847935 4860 flags.go:64] FLAG: --kube-api-qps="50" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847940 4860 flags.go:64] FLAG: --kube-reserved="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847945 4860 flags.go:64] FLAG: --kube-reserved-cgroup="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847950 4860 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847955 4860 flags.go:64] FLAG: --kubelet-cgroups="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847961 4860 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847966 4860 flags.go:64] FLAG: --lock-file="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847971 4860 flags.go:64] FLAG: --log-cadvisor-usage="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847976 4860 flags.go:64] FLAG: --log-flush-frequency="5s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847981 4860 flags.go:64] FLAG: --log-json-info-buffer-size="0" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847990 4860 flags.go:64] FLAG: --log-json-split-stream="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.847995 4860 flags.go:64] FLAG: --log-text-info-buffer-size="0" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848001 4860 flags.go:64] FLAG: --log-text-split-stream="false" Oct 14 14:48:58 crc 
kubenswrapper[4860]: I1014 14:48:58.848006 4860 flags.go:64] FLAG: --logging-format="text" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848011 4860 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848017 4860 flags.go:64] FLAG: --make-iptables-util-chains="true" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848023 4860 flags.go:64] FLAG: --manifest-url="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848044 4860 flags.go:64] FLAG: --manifest-url-header="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848051 4860 flags.go:64] FLAG: --max-housekeeping-interval="15s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848057 4860 flags.go:64] FLAG: --max-open-files="1000000" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848064 4860 flags.go:64] FLAG: --max-pods="110" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848069 4860 flags.go:64] FLAG: --maximum-dead-containers="-1" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848074 4860 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848078 4860 flags.go:64] FLAG: --memory-manager-policy="None" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848084 4860 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848118 4860 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848124 4860 flags.go:64] FLAG: --node-ip="192.168.126.11" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848128 4860 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848138 4860 flags.go:64] FLAG: --node-status-max-images="50" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848142 4860 flags.go:64] FLAG: --node-status-update-frequency="10s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848147 4860 flags.go:64] FLAG: --oom-score-adj="-999" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848151 4860 flags.go:64] FLAG: --pod-cidr="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848154 4860 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848161 4860 flags.go:64] FLAG: --pod-manifest-path="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848165 4860 flags.go:64] FLAG: --pod-max-pids="-1" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848170 4860 flags.go:64] FLAG: --pods-per-core="0" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848174 4860 flags.go:64] FLAG: --port="10250" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848178 4860 flags.go:64] FLAG: --protect-kernel-defaults="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848182 4860 flags.go:64] FLAG: --provider-id="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848186 4860 flags.go:64] FLAG: --qos-reserved="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848190 4860 flags.go:64] FLAG: --read-only-port="10255" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848195 4860 flags.go:64] FLAG: --register-node="true" Oct 14 14:48:58 crc 
kubenswrapper[4860]: I1014 14:48:58.848198 4860 flags.go:64] FLAG: --register-schedulable="true" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848202 4860 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848209 4860 flags.go:64] FLAG: --registry-burst="10" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848213 4860 flags.go:64] FLAG: --registry-qps="5" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848217 4860 flags.go:64] FLAG: --reserved-cpus="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848221 4860 flags.go:64] FLAG: --reserved-memory="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848227 4860 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848231 4860 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848235 4860 flags.go:64] FLAG: --rotate-certificates="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848239 4860 flags.go:64] FLAG: --rotate-server-certificates="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848243 4860 flags.go:64] FLAG: --runonce="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848247 4860 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848251 4860 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848256 4860 flags.go:64] FLAG: --seccomp-default="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848260 4860 flags.go:64] FLAG: --serialize-image-pulls="true" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848269 4860 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848273 4860 flags.go:64] FLAG: --storage-driver-db="cadvisor" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848277 4860 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848282 4860 flags.go:64] FLAG: --storage-driver-password="root" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848286 4860 flags.go:64] FLAG: --storage-driver-secure="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848290 4860 flags.go:64] FLAG: --storage-driver-table="stats" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848294 4860 flags.go:64] FLAG: --storage-driver-user="root" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848298 4860 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848302 4860 flags.go:64] FLAG: --sync-frequency="1m0s" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848306 4860 flags.go:64] FLAG: --system-cgroups="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848310 4860 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848316 4860 flags.go:64] FLAG: --system-reserved-cgroup="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848320 4860 flags.go:64] FLAG: --tls-cert-file="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848324 4860 flags.go:64] FLAG: --tls-cipher-suites="[]" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848329 4860 flags.go:64] FLAG: --tls-min-version="" Oct 14 14:48:58 
crc kubenswrapper[4860]: I1014 14:48:58.848333 4860 flags.go:64] FLAG: --tls-private-key-file="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848337 4860 flags.go:64] FLAG: --topology-manager-policy="none" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848342 4860 flags.go:64] FLAG: --topology-manager-policy-options="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848346 4860 flags.go:64] FLAG: --topology-manager-scope="container" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848350 4860 flags.go:64] FLAG: --v="2" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848355 4860 flags.go:64] FLAG: --version="false" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848361 4860 flags.go:64] FLAG: --vmodule="" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848366 4860 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848371 4860 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848462 4860 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848466 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848470 4860 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848474 4860 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848478 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848481 4860 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848486 4860 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848490 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848495 4860 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848499 4860 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848502 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848506 4860 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848512 4860 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848515 4860 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848519 4860 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848524 4860 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
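At verbosity --v=2 the kubelet echoes every flag with its effective value (the flags.go:64 FLAG: lines above), which makes the journal a convenient audit source. A small sketch that scrapes those pairs into a map for diffing against expected settings; the two sample lines are copied from this log:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Pulls `FLAG: --name="value"` pairs out of journal text like the dump above.
    var flagRe = regexp.MustCompile(`FLAG: --([\w-]+)="([^"]*)"`)

    func main() {
    	logText := `I1014 14:48:58.847690 4860 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
    I1014 14:48:58.848195 4860 flags.go:64] FLAG: --register-node="true"`
    	flags := map[string]string{}
    	for _, m := range flagRe.FindAllStringSubmatch(logText, -1) {
    		flags[m[1]] = m[2]
    	}
    	fmt.Println(flags)
    }
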
Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848528 4860 feature_gate.go:330] unrecognized feature gate: Example Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848532 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848536 4860 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848539 4860 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848543 4860 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848547 4860 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848551 4860 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848554 4860 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848558 4860 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848561 4860 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848564 4860 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848568 4860 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848572 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848575 4860 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848579 4860 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848582 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848586 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848589 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848593 4860 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848602 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848606 4860 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848627 4860 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848630 4860 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848634 4860 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848639 4860 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848642 4860 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 14 14:48:58 crc 
kubenswrapper[4860]: W1014 14:48:58.848646 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848651 4860 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848655 4860 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848659 4860 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848662 4860 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848666 4860 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848669 4860 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848673 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848677 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848680 4860 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848686 4860 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848690 4860 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848694 4860 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848697 4860 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848700 4860 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848704 4860 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848707 4860 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848711 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848714 4860 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848718 4860 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848721 4860 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848725 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848728 4860 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848732 4860 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848737 4860 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848740 4860 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848744 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848748 4860 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.848753 4860 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.848766 4860 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.858771 4860 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.858801 4860 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858872 4860 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858880 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858886 4860 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858891 4860 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858896 4860 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858901 4860 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858905 4860 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858910 4860 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858914 4860 feature_gate.go:330] unrecognized feature gate: Example Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858919 4860 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858924 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858929 4860 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858934 4860 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858939 4860 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858945 4860 feature_gate.go:353] Setting 
GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858953 4860 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858958 4860 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858963 4860 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858967 4860 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858971 4860 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858974 4860 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858978 4860 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858983 4860 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858992 4860 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.858998 4860 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859016 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859020 4860 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859038 4860 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859043 4860 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859047 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859051 4860 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859055 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859059 4860 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859062 4860 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859066 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859069 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859073 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859076 4860 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859081 4860 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859086 4860 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859091 4860 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859096 4860 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859101 4860 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859105 4860 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859109 4860 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859112 4860 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859116 4860 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859119 4860 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859123 4860 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859127 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859130 4860 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859133 4860 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859137 4860 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859141 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859144 4860 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859147 4860 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859151 4860 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859154 4860 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859158 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859162 4860 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859166 4860 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859170 4860 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859175 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859179 4860 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859184 4860 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 14 14:48:58 
crc kubenswrapper[4860]: W1014 14:48:58.859188 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859193 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859197 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859202 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859206 4860 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859211 4860 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.859219 4860 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859357 4860 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859364 4860 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859370 4860 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859374 4860 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859378 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859381 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859385 4860 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859388 4860 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859393 4860 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859397 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859400 4860 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859404 4860 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859407 4860 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859411 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859414 4860 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 
14:48:58.859418 4860 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859421 4860 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859425 4860 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859428 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859432 4860 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859435 4860 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859439 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859442 4860 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859446 4860 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859450 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859453 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859457 4860 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859460 4860 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859464 4860 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859467 4860 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859471 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859474 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859478 4860 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859481 4860 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859485 4860 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859488 4860 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859492 4860 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859496 4860 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859500 4860 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859505 4860 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859509 4860 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859514 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859518 4860 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859522 4860 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859526 4860 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859530 4860 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859533 4860 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859537 4860 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859540 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859544 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859547 4860 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859550 4860 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859554 4860 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859557 4860 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859561 4860 feature_gate.go:330] unrecognized feature gate: Example Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859565 4860 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859569 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859573 4860 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859578 4860 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859582 4860 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859586 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859591 4860 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859596 4860 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859602 4860 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
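The same unrecognized-gate list is logged in full several times (timestamp groups .846*, .848*, and .858*/.859*), consistent with the gate set being re-applied at successive stages of kubelet configuration; the resulting :386 summaries are identical each time. When eyeballing repetition like this, collapsing the warnings to one count per gate name helps; a sketch:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // Collapses repeated "unrecognized feature gate" warnings into one count
    // per gate name, making the near-identical passes above obvious.
    func main() {
    	sample := `W1014 14:48:58.846428 4860 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
    W1014 14:48:58.848539 4860 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
    W1014 14:48:58.859467 4860 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB`
    	counts := map[string]int{}
    	sc := bufio.NewScanner(strings.NewReader(sample))
    	for sc.Scan() {
    		if _, name, ok := strings.Cut(sc.Text(), "unrecognized feature gate: "); ok {
    			counts[name]++
    		}
    	}
    	fmt.Println(counts) // map[ChunkSizeMiB:3]
    }
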
Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859608 4860 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859613 4860 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859618 4860 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859622 4860 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859627 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859631 4860 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.859636 4860 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.859643 4860 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.859813 4860 server.go:940] "Client rotation is on, will bootstrap in background" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.867045 4860 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.867144 4860 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
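With client rotation on and the bootstrap kubeconfig still valid, the kubelet loads /var/lib/kubelet/pki/kubelet-client-current.pem and, as the next lines show, schedules rotation well before expiry (deadline 2025-12-18 for a certificate valid to 2026-02-24, then waits roughly 1567h). A sketch of that arithmetic: read a PEM certificate and pick a jittered deadline in the 70-90% band of its validity, which matches the shape of the logged deadline (the exact jitter policy is upstream client-go's, not this sketch). Pass the certificate path as the first argument:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"math/rand"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile(os.Args[1])
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	life := cert.NotAfter.Sub(cert.NotBefore)
    	frac := 0.7 + 0.2*rand.Float64() // somewhere in the 70-90% band
    	deadline := cert.NotBefore.Add(time.Duration(frac * float64(life)))
    	fmt.Println("expires:          ", cert.NotAfter)
    	fmt.Println("rotation deadline:", deadline)
    	fmt.Println("wait:             ", time.Until(deadline))
    }
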
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.868999 4860 server.go:997] "Starting client certificate rotation"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.869039 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.869189 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-18 22:08:42.174698796 +0000 UTC
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.869269 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1567h19m43.305431891s for next certificate rotation
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.907870 4860 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.912690 4860 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.932092 4860 log.go:25] "Validated CRI v1 runtime API"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.962726 4860 log.go:25] "Validated CRI v1 image API"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.964460 4860 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.969493 4860 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-14-14-41-57-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.969524 4860 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.981446 4860 manager.go:217] Machine: {Timestamp:2025-10-14 14:48:58.979201293 +0000 UTC m=+0.565984762 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:f3673689-c436-4678-b4d3-79881aec5944 BootID:e6ed96bb-defa-436f-8418-5c94eee7820a Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:4a:1e:7b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:4a:1e:7b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:35:5e:7b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:52:7a:05 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d6:bd:22 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a6:c8:22 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6a:ec:65:08:5b:2a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:de:2a:81:5b:0c:b2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.981691 4860 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
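
Annotation: the kube-apiserver-client-kubelet rotation records above are internally consistent: the certificate manager picks a jittered rotation deadline inside the tail of the certificate's validity window and then sleeps until that deadline. A quick arithmetic check of the logged values (Python; standalone arithmetic, not kubelet code; "now" is approximated by the record's own timestamp):

    from datetime import datetime, timezone

    # Values copied from the certificate_manager.go:356 records above.
    now = datetime(2025, 10, 14, 14, 48, 58, 869269, tzinfo=timezone.utc)
    deadline = datetime(2025, 12, 18, 22, 8, 42, 174699, tzinfo=timezone.utc)

    hours, rem = divmod((deadline - now).total_seconds(), 3600)
    minutes, seconds = divmod(rem, 60)
    # Prints 1567h19m43s, matching "Waiting 1567h19m43.305431891s" in the log.
    print(f"{int(hours)}h{int(minutes)}m{int(seconds)}s")
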
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.981806 4860 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.983139 4860 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.983354 4860 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.983388 4860 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.983620 4860 topology_manager.go:138] "Creating topology manager with none policy"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.983631 4860 container_manager_linux.go:303] "Creating device plugin manager"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.984220 4860 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.984249 4860 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.984857 4860 state_mem.go:36] "Initialized new in-memory state store"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.984933 4860 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.988211 4860 kubelet.go:418] "Attempting to sync node with API server"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.988233 4860 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
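
Annotation: the nodeConfig printed by container_manager_linux.go:272 is a JSON blob, so the node's eviction and reservation policy can be inspected mechanically rather than read by eye. A sketch (Python; the embedded snippet is a hand-shortened excerpt of the record above, not the full blob):

    import json

    # Shortened excerpt of the nodeConfig JSON from the record above.
    node_config = json.loads("""
    {
      "SystemReserved": {"cpu": "200m", "ephemeral-storage": "350Mi", "memory": "350Mi"},
      "HardEvictionThresholds": [
        {"Signal": "nodefs.available", "Operator": "LessThan",
         "Value": {"Quantity": null, "Percentage": 0.1}},
        {"Signal": "imagefs.available", "Operator": "LessThan",
         "Value": {"Quantity": null, "Percentage": 0.15}},
        {"Signal": "memory.available", "Operator": "LessThan",
         "Value": {"Quantity": "100Mi", "Percentage": 0}}
      ]
    }
    """)

    for t in node_config["HardEvictionThresholds"]:
        v = t["Value"]
        limit = v["Quantity"] or f"{v['Percentage']:.0%}"
        print(f"evict when {t['Signal']} {t['Operator']} {limit}")
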
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.988255 4860 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.988267 4860 kubelet.go:324] "Adding apiserver pod source"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.988281 4860 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.995552 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused
Oct 14 14:48:58 crc kubenswrapper[4860]: E1014 14:48:58.995626 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError"
Oct 14 14:48:58 crc kubenswrapper[4860]: W1014 14:48:58.995725 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused
Oct 14 14:48:58 crc kubenswrapper[4860]: E1014 14:48:58.995758 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.996865 4860 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.997882 4860 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
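
Annotation: the "connection refused" reflector errors above are the normal single-node bootstrap ordering, not a fault: this kubelet is the component that will launch the kube-apiserver static pod from /etc/kubernetes/manifests, so its first list/watch calls against api-int.crc.testing:6443 necessarily fail until that pod is running. Reproducing the same dial error from the node is trivial (Python sketch; host and port taken from the log):

    import socket

    # Endpoint from the reflector errors above; it resolves to 38.102.83.179 in this log.
    HOST, PORT = "api-int.crc.testing", 6443

    try:
        with socket.create_connection((HOST, PORT), timeout=3):
            print("apiserver is accepting connections")
    except OSError as exc:
        # While the static pod is still coming up this prints e.g.
        # "[Errno 111] Connection refused", the same dial failure the kubelet logs.
        print(f"dial failed: {exc}")
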
Oct 14 14:48:58 crc kubenswrapper[4860]: I1014 14:48:58.999130 4860 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.001071 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.001097 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.001105 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.001112 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.001124 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.001130 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.001137 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.001147 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.001155 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.001162 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.001172 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.001179 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.001933 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.002328 4860 server.go:1280] "Started kubelet"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.002464 4860 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.003116 4860 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.003515 4860 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 14 14:48:59 crc systemd[1]: Started Kubernetes Kubelet.
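
Annotation: once "Started kubelet" and "Starting to listen" appear, two cheap local liveness signals exist: TCP port 10250 accepts connections, and the podresources gRPC unix socket is present on disk. A sketch checking both from the node (Python; port and socket path taken from the server.go:163 and server.go:236 records above):

    import os
    import socket

    KUBELET_PORT = 10250
    PODRESOURCES_SOCK = "/var/lib/kubelet/pod-resources/kubelet.sock"

    def port_open(host: str, port: int) -> bool:
        """True if something accepts TCP on host:port."""
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            return False

    print("kubelet port open:", port_open("127.0.0.1", KUBELET_PORT))
    print("podresources socket present:", os.path.exists(PODRESOURCES_SOCK))
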
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.004729 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.004757 4860 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.005010 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 18:28:37.825070204 +0000 UTC
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.005065 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 747h39m38.820008207s for next certificate rotation
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.005095 4860 volume_manager.go:287] "The desired_state_of_world populator starts"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.005108 4860 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.005206 4860 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.005229 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused
Oct 14 14:48:59 crc kubenswrapper[4860]: E1014 14:48:59.005295 4860 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Oct 14 14:48:59 crc kubenswrapper[4860]: W1014 14:48:59.005592 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused
Oct 14 14:48:59 crc kubenswrapper[4860]: E1014 14:48:59.005627 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError"
Oct 14 14:48:59 crc kubenswrapper[4860]: E1014 14:48:59.006281 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="200ms"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.006427 4860 factory.go:55] Registering systemd factory
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.006441 4860 factory.go:221] Registration of the systemd container factory successfully
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.008799 4860 server.go:460] "Adding debug handlers to kubelet server"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.009020 4860 factory.go:153] Registering CRI-O factory
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.009096 4860 factory.go:221] Registration of the crio container factory successfully
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.010451 4860 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.010481 4860 factory.go:103] Registering Raw factory
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.010503 4860 manager.go:1196] Started watching for new ooms in manager
Oct 14 14:48:59 crc kubenswrapper[4860]: E1014 14:48:59.011272 4860 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.179:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186e62fc0ae1f568 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-14 14:48:59.002303848 +0000 UTC m=+0.589087287,LastTimestamp:2025-10-14 14:48:59.002303848 +0000 UTC m=+0.589087287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.013157 4860 manager.go:319] Starting recovery of all containers
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.013777 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.013848 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.013868 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.013886 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.013900 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.014011 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.014024 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.014064 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.014084 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.014100 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.014113 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.014127 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.014144 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.014159 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.014175 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.014187 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.014205 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.014219 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.015125 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.015184 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.015212 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.016637 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.016657 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.016676 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.016689 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.016708 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.016729 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.016751 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.016766 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.016788 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.016801 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.016822 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.016835 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.016849 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.017318 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.017339 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.017357 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.017370 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.017387 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.021592 4860 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.021708 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.021733 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.021749 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.021768 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.021782 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026258 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026288 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026301 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026312 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026323 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026334 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026345 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026354 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026372 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026385 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026397 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026408 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026419 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026428 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026437 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026446 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026454 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026464 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026474 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026483 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026492 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026503 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026512 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026523 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026533 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026542 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026552 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026561 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026593 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026617 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026627 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026636 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026646 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026656 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026666 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026676 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026686 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026695 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026705 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026716 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026725 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026734 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026770 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026784 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026795 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026807 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026820 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026833 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026846 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026858 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026871 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026883 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026894 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026905 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026917 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026929 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026939 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026952 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026964 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.026978 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.027019 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.027065 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.027080 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.027096 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.027109 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.027126 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.027139 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.027154 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.027168 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.027181 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028014 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028081 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028094 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028110 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028120 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028130 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028141 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028151 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028162 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028173 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028219 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028236 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028248 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028259 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028270 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028282 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028291 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028302 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028313 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028331 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028351 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028364 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028375 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028386 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028396 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254"
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028407 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028417 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028428 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028438 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028448 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028459 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028470 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028480 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028492 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028504 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028515 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028526 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028536 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028545 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028554 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028564 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028574 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028583 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028593 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028602 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028611 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028622 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028632 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028641 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028652 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028665 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028673 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028682 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028691 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028702 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028716 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028741 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028753 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028763 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028775 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028785 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028794 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028804 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028814 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028823 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028833 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028845 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028854 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028864 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028873 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028883 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028893 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028903 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028914 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028925 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028934 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028945 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028957 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028968 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028981 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.028991 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.029001 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.029014 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.029048 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.029059 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.029071 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.029085 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.029104 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.029114 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.029126 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.029136 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.029147 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.029157 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.029167 4860 reconstruct.go:97] "Volume reconstruction finished" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.029178 4860 reconciler.go:26] "Reconciler: start to sync state" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.035092 4860 manager.go:324] Recovery completed Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.044946 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.046978 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.047550 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.047570 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.049504 4860 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.049522 4860 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.049542 4860 state_mem.go:36] "Initialized new in-memory state store" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.058647 4860 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.060252 4860 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.060307 4860 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.060335 4860 kubelet.go:2335] "Starting kubelet main sync loop" Oct 14 14:48:59 crc kubenswrapper[4860]: E1014 14:48:59.060435 4860 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 14 14:48:59 crc kubenswrapper[4860]: W1014 14:48:59.061117 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Oct 14 14:48:59 crc kubenswrapper[4860]: E1014 14:48:59.061182 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.068915 4860 policy_none.go:49] "None policy: Start" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.070957 4860 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.070997 4860 state_mem.go:35] "Initializing new in-memory state store" Oct 14 14:48:59 crc kubenswrapper[4860]: E1014 14:48:59.105774 4860 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.119993 4860 manager.go:334] "Starting Device Plugin manager" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.120071 4860 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.120083 4860 server.go:79] "Starting device plugin registration server" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.120537 4860 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.120552 4860 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.120702 4860 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.120818 4860 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.120858 4860 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 14 14:48:59 crc kubenswrapper[4860]: E1014 14:48:59.127396 4860 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.161509 4860 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 14 14:48:59 crc kubenswrapper[4860]: 
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.161598 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.162926 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.162957 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.162968 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.163089 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.163270 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.163322 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.163755 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.163774 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.163781 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.163858 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.163958 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.164004 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.164081 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.164115 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.164126 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.164441 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.164468 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.164478 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.164743 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.164780 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.164791 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.164800 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.164830 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.164747 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.165473 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.165515 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.165526 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.165626 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.165641 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.165648 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.165862 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.165933 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.165965 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.167342 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.167392 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.167405 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.169173 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.169194 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.169220 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.169328 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.169349 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.169971 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.170004 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.170016 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:48:59 crc kubenswrapper[4860]: E1014 14:48:59.206709 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="400ms" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.220732 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.221605 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.221634 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.221643 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.221659 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 14:48:59 crc kubenswrapper[4860]: E1014 14:48:59.222051 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.234622 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.234656 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.234677 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.234692 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.234720 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.234762 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.234780 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.234796 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.234819 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.234858 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.234891 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.234907 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.234924 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.234950 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.234968 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336398 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336439 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336463 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336486 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336509 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336530 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336549 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336569 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336574 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336587 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336605 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336611 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336650 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336654 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336653 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336681 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336688 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336707 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336713 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336735 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336741 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336746 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336756 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336751 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336798 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336763 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336775 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336777 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336684 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.336715 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.422766 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.423730 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.423777 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.423786 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.423805 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 14:48:59 crc kubenswrapper[4860]: E1014 14:48:59.424128 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.502006 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.521253 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.527083 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.544511 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: W1014 14:48:59.547182 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-c0e0ee9bdf32d98db73975fb39554e5b33d223f6748df9551f234e056caf3855 WatchSource:0}: Error finding container c0e0ee9bdf32d98db73975fb39554e5b33d223f6748df9551f234e056caf3855: Status 404 returned error can't find the container with id c0e0ee9bdf32d98db73975fb39554e5b33d223f6748df9551f234e056caf3855 Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.548450 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 14:48:59 crc kubenswrapper[4860]: W1014 14:48:59.552152 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-606cad0b71a72acac6e9f601fe8ea1251f120c539864d709ea125dacc8c686de WatchSource:0}: Error finding container 606cad0b71a72acac6e9f601fe8ea1251f120c539864d709ea125dacc8c686de: Status 404 returned error can't find the container with id 606cad0b71a72acac6e9f601fe8ea1251f120c539864d709ea125dacc8c686de Oct 14 14:48:59 crc kubenswrapper[4860]: W1014 14:48:59.563676 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e78ff081cff8a0fa23ed1540b35dc9815d0d171a3db1884d670936d58ea231a5 WatchSource:0}: Error finding container e78ff081cff8a0fa23ed1540b35dc9815d0d171a3db1884d670936d58ea231a5: Status 404 returned error can't find the container with id e78ff081cff8a0fa23ed1540b35dc9815d0d171a3db1884d670936d58ea231a5 Oct 14 14:48:59 crc kubenswrapper[4860]: W1014 14:48:59.564737 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-88aec7599caf1ecf81dec302aa8c2e3bd98b721503b4c957f8cd08173ccb5b82 WatchSource:0}: Error finding container 88aec7599caf1ecf81dec302aa8c2e3bd98b721503b4c957f8cd08173ccb5b82: Status 404 returned error can't find the container with id 88aec7599caf1ecf81dec302aa8c2e3bd98b721503b4c957f8cd08173ccb5b82 Oct 14 14:48:59 crc kubenswrapper[4860]: W1014 14:48:59.565918 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-4c07f6efb61361f74719be6dbfaa663fa4809d95f6eb409b974aa1ca08f5640f WatchSource:0}: Error finding container 4c07f6efb61361f74719be6dbfaa663fa4809d95f6eb409b974aa1ca08f5640f: Status 404 returned error can't find the container with id 4c07f6efb61361f74719be6dbfaa663fa4809d95f6eb409b974aa1ca08f5640f Oct 14 14:48:59 crc kubenswrapper[4860]: E1014 14:48:59.607361 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="800ms" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.824205 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.825469 4860 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.825518 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.825535 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:48:59 crc kubenswrapper[4860]: I1014 14:48:59.825560 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 14:48:59 crc kubenswrapper[4860]: E1014 14:48:59.826007 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Oct 14 14:48:59 crc kubenswrapper[4860]: W1014 14:48:59.907097 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Oct 14 14:48:59 crc kubenswrapper[4860]: E1014 14:48:59.907177 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Oct 14 14:49:00 crc kubenswrapper[4860]: I1014 14:49:00.006364 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Oct 14 14:49:00 crc kubenswrapper[4860]: I1014 14:49:00.066414 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c0e0ee9bdf32d98db73975fb39554e5b33d223f6748df9551f234e056caf3855"} Oct 14 14:49:00 crc kubenswrapper[4860]: I1014 14:49:00.067321 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4c07f6efb61361f74719be6dbfaa663fa4809d95f6eb409b974aa1ca08f5640f"} Oct 14 14:49:00 crc kubenswrapper[4860]: I1014 14:49:00.068588 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"88aec7599caf1ecf81dec302aa8c2e3bd98b721503b4c957f8cd08173ccb5b82"} Oct 14 14:49:00 crc kubenswrapper[4860]: I1014 14:49:00.069447 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e78ff081cff8a0fa23ed1540b35dc9815d0d171a3db1884d670936d58ea231a5"} Oct 14 14:49:00 crc kubenswrapper[4860]: I1014 14:49:00.070639 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"606cad0b71a72acac6e9f601fe8ea1251f120c539864d709ea125dacc8c686de"} Oct 14 14:49:00 crc kubenswrapper[4860]: W1014 14:49:00.326168 4860 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Oct 14 14:49:00 crc kubenswrapper[4860]: E1014 14:49:00.326578 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Oct 14 14:49:00 crc kubenswrapper[4860]: E1014 14:49:00.408669 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="1.6s" Oct 14 14:49:00 crc kubenswrapper[4860]: W1014 14:49:00.462923 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Oct 14 14:49:00 crc kubenswrapper[4860]: E1014 14:49:00.463051 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Oct 14 14:49:00 crc kubenswrapper[4860]: W1014 14:49:00.501878 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Oct 14 14:49:00 crc kubenswrapper[4860]: E1014 14:49:00.501965 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Oct 14 14:49:00 crc kubenswrapper[4860]: I1014 14:49:00.626536 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:00 crc kubenswrapper[4860]: I1014 14:49:00.627907 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:00 crc kubenswrapper[4860]: I1014 14:49:00.627945 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:00 crc kubenswrapper[4860]: I1014 14:49:00.627957 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:00 crc kubenswrapper[4860]: I1014 14:49:00.627982 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 14:49:00 crc kubenswrapper[4860]: E1014 14:49:00.628430 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Oct 14 
14:49:00 crc kubenswrapper[4860]: E1014 14:49:00.958320 4860 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.179:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186e62fc0ae1f568 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-14 14:48:59.002303848 +0000 UTC m=+0.589087287,LastTimestamp:2025-10-14 14:48:59.002303848 +0000 UTC m=+0.589087287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.006203 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.076205 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05" exitCode=0 Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.076290 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05"} Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.076345 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.077207 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.077231 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.077239 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.078323 4860 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520" exitCode=0 Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.078419 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.078473 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520"} Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.079088 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.079138 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.079162 4860 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.079172 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.080039 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.080064 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.080073 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.081501 4860 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0" exitCode=0 Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.081558 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0"} Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.081613 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.082435 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.082660 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.082681 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.087159 4860 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="420bb77c1e6cdfa7d07b36a04764f9404a1ada3d66e58fa5444fc93d8981bd11" exitCode=0 Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.087195 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"420bb77c1e6cdfa7d07b36a04764f9404a1ada3d66e58fa5444fc93d8981bd11"} Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.087241 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.088020 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.088107 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.088121 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.090338 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2"} Oct 14 14:49:01 crc kubenswrapper[4860]: I1014 14:49:01.090372 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f"} Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.006205 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Oct 14 14:49:02 crc kubenswrapper[4860]: E1014 14:49:02.010066 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="3.2s" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.096186 4860 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20" exitCode=0 Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.096303 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20"} Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.096349 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.097503 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.097550 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.097568 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.098640 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"14cf97a4526994bafc923e20f51157fe84ec6690b3bba1f2210a43105a2ce6a7"} Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.098775 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.099995 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.100046 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.100058 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.101694 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8"} Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.101746 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.101764 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0"} Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.102452 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.102484 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.102497 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.105870 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582"} Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.105914 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1"} Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.105931 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff"} Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.105947 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba"} Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.111431 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3b65bb07a7c9a756a34b9f485c8521029672018515e93eef3f557db38a56c428"} Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.111473 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"94ba1e959f7ea47716c4a292675af40550a65b87c5ce2c6e2bc9d7579997382a"} Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.111486 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9d3453fcf3b3874b2b59af674d5bc2c6d806b1431e65aefbed34bf5dbc26a945"} Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.111587 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.115209 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.115494 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.115517 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.229497 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.230865 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.230895 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.230905 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.230927 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 14:49:02 crc kubenswrapper[4860]: E1014 14:49:02.231502 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Oct 14 14:49:02 crc kubenswrapper[4860]: I1014 14:49:02.432747 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 14:49:02 crc kubenswrapper[4860]: W1014 14:49:02.443717 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Oct 14 14:49:02 crc kubenswrapper[4860]: E1014 14:49:02.443851 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Oct 14 14:49:02 crc kubenswrapper[4860]: W1014 14:49:02.943452 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Oct 14 14:49:02 crc kubenswrapper[4860]: E1014 14:49:02.943562 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Oct 14 14:49:02 crc kubenswrapper[4860]: W1014 14:49:02.984467 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 
38.102.83.179:6443: connect: connection refused Oct 14 14:49:02 crc kubenswrapper[4860]: E1014 14:49:02.984889 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.006320 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.117007 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e"} Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.117359 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.118247 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.118280 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.118293 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.119117 4860 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c" exitCode=0 Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.119222 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.119346 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.119389 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c"} Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.119464 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.119393 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.119420 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.123245 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.123278 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.123297 
4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.123333 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.123387 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.123409 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.123623 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.123721 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.123788 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.125616 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.125664 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:03 crc kubenswrapper[4860]: I1014 14:49:03.125679 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.128959 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a"} Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.129050 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8"} Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.129102 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4"} Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.129116 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2"} Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.129076 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.129065 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.129715 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.129787 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.130185 4860 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.130218 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.130228 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.132294 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.132298 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.132368 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.132380 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.132340 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.132434 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:04 crc kubenswrapper[4860]: I1014 14:49:04.287799 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.137050 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.137640 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.137799 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578"} Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.138291 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.138336 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.138348 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.138401 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.138419 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.138430 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.428450 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 14:49:05 crc 
kubenswrapper[4860]: I1014 14:49:05.428586 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.430147 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.430205 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.430219 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.431676 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.432776 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.432806 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.432818 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.432845 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.432841 4860 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.432891 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 14 14:49:05 crc kubenswrapper[4860]: I1014 14:49:05.660013 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:49:06 crc kubenswrapper[4860]: I1014 14:49:06.139622 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:06 crc kubenswrapper[4860]: I1014 14:49:06.139672 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:06 crc kubenswrapper[4860]: I1014 14:49:06.141344 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:06 crc kubenswrapper[4860]: I1014 14:49:06.141390 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:06 crc kubenswrapper[4860]: I1014 14:49:06.141403 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:06 crc kubenswrapper[4860]: I1014 14:49:06.141613 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Oct 14 14:49:06 crc kubenswrapper[4860]: I1014 14:49:06.141669 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:06 crc kubenswrapper[4860]: I1014 14:49:06.141683 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:06 crc kubenswrapper[4860]: I1014 14:49:06.242909 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:49:07 crc kubenswrapper[4860]: I1014 14:49:07.142405 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:07 crc kubenswrapper[4860]: I1014 14:49:07.143363 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:07 crc kubenswrapper[4860]: I1014 14:49:07.143423 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:07 crc kubenswrapper[4860]: I1014 14:49:07.143437 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:08 crc kubenswrapper[4860]: I1014 14:49:08.510875 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 14:49:08 crc kubenswrapper[4860]: I1014 14:49:08.511369 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:08 crc kubenswrapper[4860]: I1014 14:49:08.512913 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:08 crc kubenswrapper[4860]: I1014 14:49:08.512958 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:08 crc kubenswrapper[4860]: I1014 14:49:08.512970 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:09 crc kubenswrapper[4860]: E1014 14:49:09.127497 4860 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 14 14:49:09 crc kubenswrapper[4860]: I1014 14:49:09.695158 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Oct 14 14:49:09 crc kubenswrapper[4860]: I1014 14:49:09.695375 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:09 crc kubenswrapper[4860]: I1014 14:49:09.696825 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:09 crc kubenswrapper[4860]: I1014 14:49:09.696864 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:09 crc kubenswrapper[4860]: I1014 14:49:09.696873 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:10 crc kubenswrapper[4860]: I1014 14:49:10.447447 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 14:49:10 crc kubenswrapper[4860]: I1014 14:49:10.447651 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:10 crc kubenswrapper[4860]: I1014 
14:49:10.450009 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:10 crc kubenswrapper[4860]: I1014 14:49:10.450099 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:10 crc kubenswrapper[4860]: I1014 14:49:10.450120 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:10 crc kubenswrapper[4860]: I1014 14:49:10.455170 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 14:49:10 crc kubenswrapper[4860]: I1014 14:49:10.701841 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Oct 14 14:49:10 crc kubenswrapper[4860]: I1014 14:49:10.702016 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:10 crc kubenswrapper[4860]: I1014 14:49:10.703375 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:10 crc kubenswrapper[4860]: I1014 14:49:10.703482 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:10 crc kubenswrapper[4860]: I1014 14:49:10.703510 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:11 crc kubenswrapper[4860]: I1014 14:49:11.152889 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:11 crc kubenswrapper[4860]: I1014 14:49:11.153926 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:11 crc kubenswrapper[4860]: I1014 14:49:11.153974 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:11 crc kubenswrapper[4860]: I1014 14:49:11.153992 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:11 crc kubenswrapper[4860]: I1014 14:49:11.157747 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 14:49:12 crc kubenswrapper[4860]: I1014 14:49:12.155167 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:12 crc kubenswrapper[4860]: I1014 14:49:12.156691 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:12 crc kubenswrapper[4860]: I1014 14:49:12.156740 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:12 crc kubenswrapper[4860]: I1014 14:49:12.156750 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:13 crc kubenswrapper[4860]: W1014 14:49:13.330678 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 14 14:49:13 crc kubenswrapper[4860]: I1014 14:49:13.330782 4860 trace.go:236] Trace[1214521899]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (14-Oct-2025 14:49:03.329) (total time: 10001ms): Oct 14 14:49:13 crc kubenswrapper[4860]: Trace[1214521899]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:49:13.330) Oct 14 14:49:13 crc kubenswrapper[4860]: Trace[1214521899]: [10.001321069s] [10.001321069s] END Oct 14 14:49:13 crc kubenswrapper[4860]: E1014 14:49:13.330804 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 14 14:49:13 crc kubenswrapper[4860]: I1014 14:49:13.412401 4860 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 14 14:49:13 crc kubenswrapper[4860]: I1014 14:49:13.412491 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 14 14:49:13 crc kubenswrapper[4860]: I1014 14:49:13.420465 4860 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 14 14:49:13 crc kubenswrapper[4860]: I1014 14:49:13.420522 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 14 14:49:15 crc kubenswrapper[4860]: I1014 14:49:15.434316 4860 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 14:49:15 crc kubenswrapper[4860]: I1014 14:49:15.434401 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 14 14:49:15 crc kubenswrapper[4860]: I1014 14:49:15.666798 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:49:15 crc kubenswrapper[4860]: I1014 14:49:15.666967 4860 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:15 crc kubenswrapper[4860]: I1014 14:49:15.668464 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:15 crc kubenswrapper[4860]: I1014 14:49:15.668504 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:15 crc kubenswrapper[4860]: I1014 14:49:15.668517 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:15 crc kubenswrapper[4860]: I1014 14:49:15.676038 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:49:16 crc kubenswrapper[4860]: I1014 14:49:16.166329 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 14:49:16 crc kubenswrapper[4860]: I1014 14:49:16.166421 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:16 crc kubenswrapper[4860]: I1014 14:49:16.167825 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:16 crc kubenswrapper[4860]: I1014 14:49:16.167885 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:16 crc kubenswrapper[4860]: I1014 14:49:16.167904 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:18 crc kubenswrapper[4860]: I1014 14:49:18.265642 4860 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 14 14:49:18 crc kubenswrapper[4860]: E1014 14:49:18.406876 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Oct 14 14:49:18 crc kubenswrapper[4860]: I1014 14:49:18.409274 4860 trace.go:236] Trace[315189123]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Oct-2025 14:49:07.005) (total time: 11403ms): Oct 14 14:49:18 crc kubenswrapper[4860]: Trace[315189123]: ---"Objects listed" error: 11403ms (14:49:18.409) Oct 14 14:49:18 crc kubenswrapper[4860]: Trace[315189123]: [11.403411454s] [11.403411454s] END Oct 14 14:49:18 crc kubenswrapper[4860]: I1014 14:49:18.409315 4860 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 14 14:49:18 crc kubenswrapper[4860]: I1014 14:49:18.409313 4860 trace.go:236] Trace[164705652]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Oct-2025 14:49:06.939) (total time: 11469ms): Oct 14 14:49:18 crc kubenswrapper[4860]: Trace[164705652]: ---"Objects listed" error: 11469ms (14:49:18.409) Oct 14 14:49:18 crc kubenswrapper[4860]: Trace[164705652]: [11.46953313s] [11.46953313s] END Oct 14 14:49:18 crc kubenswrapper[4860]: I1014 14:49:18.409342 4860 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 14 14:49:18 crc kubenswrapper[4860]: I1014 14:49:18.410257 4860 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Oct 14 14:49:18 crc kubenswrapper[4860]: I1014 14:49:18.412405 4860 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Oct 14 
14:49:18 crc kubenswrapper[4860]: E1014 14:49:18.418139 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Oct 14 14:49:18 crc kubenswrapper[4860]: I1014 14:49:18.477647 4860 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:45488->192.168.126.11:17697: read: connection reset by peer" start-of-body= Oct 14 14:49:18 crc kubenswrapper[4860]: I1014 14:49:18.477716 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:45488->192.168.126.11:17697: read: connection reset by peer" Oct 14 14:49:18 crc kubenswrapper[4860]: I1014 14:49:18.478188 4860 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Oct 14 14:49:18 crc kubenswrapper[4860]: I1014 14:49:18.478266 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.004159 4860 apiserver.go:52] "Watching apiserver" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.008229 4860 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.008595 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.008964 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.009175 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.009258 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
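The two E-level entries above are the kubelet's heartbeat path failing end to end: the Lease GET against api-int.crc.testing:6443 times out, so lease renewal backs off for 6.4s and node registration is rejected until the infra config cache syncs. A minimal client-go sketch for inspecting that same Lease out of band; the kubeconfig path is an assumption, while the namespace and Lease name come from the URL in the error:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location on the CRC host; adjust as needed.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The kubelet heartbeats by renewing this Lease; "Failed to ensure
        // lease exists" means this GET never returned within the timeout.
        lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            fmt.Println("lease fetch failed:", err)
            return
        }
        if lease.Spec.HolderIdentity != nil {
            fmt.Println("holder:", *lease.Spec.HolderIdentity)
        }
        fmt.Println("last renewed:", lease.Spec.RenewTime)
    }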
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.009333 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.009366 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.009406 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.009427 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.009468 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.009568 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.013105 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.013321 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.013419 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.013487 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.013535 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.013332 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.013645 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.013693 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.016593 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.038649 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
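Every pod in the SyncLoop ADD batch above is stuck on the same condition: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no CNI config yet (ovn-kubernetes has not written one at this point in startup). A sketch of the equivalent directory check; the path comes from the log message, while the extension filter mirrors common CNI loader behavior and is an assumption:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory named in the NetworkPluginNotReady message.
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config present:", e.Name())
                found = true
            }
        }
        if !found {
            // Matches the kubelet's complaint: no CNI configuration file yet.
            fmt.Println("no CNI configuration file in", dir)
        }
    }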
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.038649 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.052746 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.071627 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.086764 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
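The err blobs in these status_manager.go:875 entries are strategic-merge patches that arrive multiply quoted: the kubelet quotes the patch inside its own error string, so every literal quote shows up as \\\" in the journal. A sketch of peeling one quoting layer with strconv.Unquote; the literal below is an abbreviated fragment of the first patch, not the full blob:

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        // Abbreviated, one-layer-escaped copy of the patch body seen above.
        escaped := `"{\"metadata\":{\"uid\":\"37a5e44f-9a88-4405-be8a-b645485e7312\"}}"`
        plain, err := strconv.Unquote(escaped)
        if err != nil {
            panic(err)
        }
        fmt.Println(plain) // {"metadata":{"uid":"37a5e44f-9a88-4405-be8a-b645485e7312"}}
    }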
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.096126 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.105847 4860 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.108115 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115174 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115230 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115266 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115288 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115307 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115328 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115351 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
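Every one of the patch failures above shares a single root cause visible in their last clause: Post https://127.0.0.1:9743/pod is refused because the network-node-identity webhook pod is itself still ContainerCreating, so nothing listens on 9743 yet. A minimal connectivity sketch for that endpoint; address and port come from the log, the timeout is an arbitrary choice:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Endpoint from the "failed calling webhook" errors above.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
        if err != nil {
            // "connect: connection refused" here matches the journal:
            // nothing is accepting on the webhook port yet.
            fmt.Println("webhook unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("webhook port is accepting connections")
    }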
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115374 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115395 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115415 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115437 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115459 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115480 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115503 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115525 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115549 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115572 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115595 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115619 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115638 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115642 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115647 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115710 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115730 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115752 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115772 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115791 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115812 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115835 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115842 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115857 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115875 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115893 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115916 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115937 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115953 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115971 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115991 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116008 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116046 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116072 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116087 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116106 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116122 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116142 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116186 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116202 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116217 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116233 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116250 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116266 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116285 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116303 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116319 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116335 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115856 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.115950 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116089 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116096 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116149 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116269 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116287 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116296 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116339 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116471 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116456 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116539 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116614 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116701 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116768 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116854 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116965 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116991 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117087 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117118 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117179 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117334 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117377 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117404 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117458 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117527 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117578 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117589 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117723 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117739 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117860 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.116352 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117892 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117927 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117952 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117971 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.117994 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118017 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118054 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118073 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118095 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118114 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118132 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118149 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118166 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118187 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118205 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118224 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118241 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118260 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118275 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118292 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118297 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118312 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118339 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118361 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118381 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118382 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118395 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118445 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118473 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118505 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118531 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118555 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118578 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118602 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118642 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118675 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118700 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118725 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118789 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118815 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118839 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118862 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118905 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118932 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118957 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.118981 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119002 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119054 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119081 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119105 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119129 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119153 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119176 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119202 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119239 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119263 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119285 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119305 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119326 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119346 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119354 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119367 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119392 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119435 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119463 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119488 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119510 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119534 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119558 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119581 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119607 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119629 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119650 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119676 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119702 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119724 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119747 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119778 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119803 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119827 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119851 4860 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119872 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119895 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119919 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119941 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119963 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.119986 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.120007 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.120372 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.120796 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 
14:49:19.120833 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.120856 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.120881 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.120904 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.120952 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.120975 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.120998 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121022 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121062 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121083 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 14 14:49:19 crc 
kubenswrapper[4860]: I1014 14:49:19.121097 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121109 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121159 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121186 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121207 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121224 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121243 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121304 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121325 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121342 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121346 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121359 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121379 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121400 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121466 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121501 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121525 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121545 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121564 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121585 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121605 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121648 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121669 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121687 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121706 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121717 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121725 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121770 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121792 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121813 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121815 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121831 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121850 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121868 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121890 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121900 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121910 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121929 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121946 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121965 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.121981 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122000 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122016 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122108 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122127 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122261 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122285 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122308 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122334 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122356 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122382 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122438 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122474 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122514 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122541 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 14:49:19 
crc kubenswrapper[4860]: I1014 14:49:19.122565 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122643 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122667 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122696 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122725 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122781 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122806 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122831 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122853 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122876 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122955 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122967 4860 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122978 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122987 4860 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122997 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.123006 4860 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.123016 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.123040 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.123051 4860 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125072 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125097 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125111 4860 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125127 4860 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125140 4860 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125154 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125168 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125185 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125203 4860 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125216 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125229 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125244 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125257 4860 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125269 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125284 4860 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125297 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125312 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125325 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125338 4860 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125350 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125364 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125377 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125390 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125402 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125414 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125426 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125438 4860 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125452 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125465 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125478 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125491 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125504 4860 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125516 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125528 4860 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125541 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.133792 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.134582 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122156 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122310 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.137069 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.122871 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.123441 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.123637 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.123848 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.124021 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.137254 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.124270 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.124305 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.124377 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.124641 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.124830 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.124841 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.124963 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125072 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125106 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.125313 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.126275 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.137748 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.126505 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.126516 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.127553 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.127627 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.127625 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.127726 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.127739 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.127788 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.127831 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.133195 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.133438 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.133589 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.133723 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.133885 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.133948 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.133963 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.134186 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.134260 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.134425 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.134856 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.135051 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.138231 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.135138 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.135288 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.135428 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.135577 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.135591 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.135737 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.135735 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.136416 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.136590 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.137980 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.138121 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.138407 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.138425 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.138541 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.138647 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.138961 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.138986 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.139875 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.140122 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.140445 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.140641 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.140787 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:19.640692893 +0000 UTC m=+21.227476412 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.140883 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.141205 4860 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.141387 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.141446 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.141634 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.141910 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.142335 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.142634 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.143059 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.143557 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.143601 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.143624 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.143796 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.144075 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.145213 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.145289 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.145396 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.145627 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.145656 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.145807 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.145871 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.145873 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.145891 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.146191 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.146212 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.146375 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.146415 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.146554 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.146915 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.147015 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.147249 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.149281 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.149530 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.150145 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.150176 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.150653 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.150699 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.150881 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.150896 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.151115 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.151368 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.151406 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.152136 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.152157 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.152317 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.152435 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.156106 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.156471 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.157084 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.157102 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.157216 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.157328 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.157354 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.157492 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.157582 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.158126 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:49:19.658101987 +0000 UTC m=+21.244885436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.158455 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.158465 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.158490 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.138248 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.158650 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.159256 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.160471 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.160579 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.160647 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:19.660621809 +0000 UTC m=+21.247405258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.161049 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.161258 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.172754 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.173044 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.173103 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.173157 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.173367 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.173648 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.174106 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.175581 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.176329 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.176355 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.176372 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.176534 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.176925 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.176488 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.176647 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:19.676430845 +0000 UTC m=+21.263214374 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.177237 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.177337 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
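The kube-api-access-cqllr failure above comes from a projected volume: each kube-api-access-* volume aggregates a service account token with the namespace's kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, so SetUp cannot succeed while any source is "not registered" in the kubelet's local object caches. A simplified, hypothetical sketch of why every source must resolve first; names mirror the log, the map and function are illustrative only.

```go
// Sketch: a projected kube-api-access volume fails SetUp until all of its
// source objects are present in the kubelet's caches. Hypothetical
// simplification of pkg/volume/projected behavior.
package main

import "fmt"

// Local cache of ConfigMaps keyed by "namespace/name"; the entries for
// openshift-network-diagnostics are deliberately absent, as in the log.
var configMaps = map[string][]byte{}

func setUpKubeAPIAccess(namespace string) error {
	var errs []string
	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		if _, ok := configMaps[namespace+"/"+name]; !ok {
			errs = append(errs, fmt.Sprintf("object %q/%q not registered", namespace, name))
		}
	}
	if len(errs) > 0 {
		// Every source must resolve; a single missing ConfigMap fails the volume.
		return fmt.Errorf("error preparing data for projected volume: %v", errs)
	}
	return nil // the real volume also writes the token and namespace file
}

func main() {
	if err := setUpKubeAPIAccess("openshift-network-diagnostics"); err != nil {
		fmt.Println("MountVolume.SetUp failed:", err)
	}
}
```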
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.177718 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-wjnk2"]
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.178253 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-dcr2g"]
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.178502 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dcr2g"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.178860 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wjnk2"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.181391 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.186645 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.187392 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.188313 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.188895 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.190076 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.190311 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.190542 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.190802 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.191197 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.190864 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.190935 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.190970 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.191039 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.191623 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.192151 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.192708 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.193086 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.193166 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.197481 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.197523 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.197540 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.197631 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:19.697592531 +0000 UTC m=+21.284375990 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.202949 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.205783 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e" exitCode=255
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.205843 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e"}
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.210683 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.216559 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.222521 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.223402 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.224072 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.232788 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
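Both "Failed to update status for pod" entries above fail the same way: the apiserver must consult the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 before admitting the status patch, but nothing is listening there yet during early startup, so the patch is rejected with "connection refused" and the status manager retries later. A sketch of that refused call; the endpoint comes from the log, while the patch body and use of a plain HTTP client are illustrative assumptions, not the apiserver's actual webhook client.

```go
// Sketch: calling a webhook endpoint that has no listener yet surfaces
// the same "dial tcp ... connect: connection refused" seen in the log.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Illustrative status patch; the real one is a strategic merge patch.
	patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	client := &http.Client{Timeout: 10 * time.Second}
	// Nothing listens on this port during early startup.
	resp, err := client.Post("https://127.0.0.1:9743/pod?timeout=10s",
		"application/json", bytes.NewReader(patch))
	if err != nil {
		fmt.Println("failed to call webhook:", err) // connection refused
		return
	}
	defer resp.Body.Close()
	fmt.Println("webhook status:", resp.Status)
}
```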
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233558 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233595 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-run-netns\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233614 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233630 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-system-cni-dir\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233638 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233644 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-os-release\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233692 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233695 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-etc-kubernetes\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233742 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-multus-cni-dir\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 
14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233756 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-multus-conf-dir\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233771 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfldp\" (UniqueName: \"kubernetes.io/projected/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-kube-api-access-dfldp\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233786 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-run-k8s-cni-cncf-io\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233800 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-var-lib-cni-bin\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233815 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-hostroot\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233828 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-multus-daemon-config\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233842 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-run-multus-certs\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233863 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-cnibin\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233875 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-cni-binary-copy\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233889 
4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-var-lib-kubelet\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233911 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6385a106-293c-455e-99ef-9810b91fec6d-hosts-file\") pod \"node-resolver-wjnk2\" (UID: \"6385a106-293c-455e-99ef-9810b91fec6d\") " pod="openshift-dns/node-resolver-wjnk2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233926 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbfft\" (UniqueName: \"kubernetes.io/projected/6385a106-293c-455e-99ef-9810b91fec6d-kube-api-access-kbfft\") pod \"node-resolver-wjnk2\" (UID: \"6385a106-293c-455e-99ef-9810b91fec6d\") " pod="openshift-dns/node-resolver-wjnk2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233940 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-multus-socket-dir-parent\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.233960 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-var-lib-cni-multus\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234231 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234246 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234260 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234273 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234285 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234297 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on 
node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234309 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234320 4860 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234332 4860 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234345 4860 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234357 4860 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234367 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234378 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234389 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234399 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234408 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234419 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234429 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234439 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234452 4860 
reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234463 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234474 4860 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234485 4860 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234496 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234507 4860 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234518 4860 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234528 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234538 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234548 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234556 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234565 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234573 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc 
kubenswrapper[4860]: I1014 14:49:19.234582 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234591 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234599 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234607 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234616 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234624 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234633 4860 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234642 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234650 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234657 4860 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234665 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234673 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234682 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" 
Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234690 4860 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234700 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234710 4860 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234720 4860 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234731 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234743 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234753 4860 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234763 4860 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234775 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234786 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234828 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234839 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234852 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc 
kubenswrapper[4860]: I1014 14:49:19.234863 4860 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234874 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234905 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234918 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234930 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234942 4860 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234953 4860 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234964 4860 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234974 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234985 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.234995 4860 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235005 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235052 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc 
kubenswrapper[4860]: I1014 14:49:19.235065 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235074 4860 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235082 4860 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235091 4860 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235100 4860 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235108 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235116 4860 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235126 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235134 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235142 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235150 4860 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235158 4860 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235166 4860 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: 
I1014 14:49:19.235174 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235182 4860 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235191 4860 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235199 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235208 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235217 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235225 4860 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235233 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235241 4860 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235249 4860 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235257 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235265 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235273 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc 
kubenswrapper[4860]: I1014 14:49:19.235281 4860 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235290 4860 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235298 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235307 4860 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235315 4860 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235323 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235331 4860 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235339 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235346 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235354 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235364 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235372 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235380 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235390 4860 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235398 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235406 4860 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235414 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235422 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235431 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235440 4860 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235448 4860 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235456 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235465 4860 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235473 4860 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235481 4860 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235490 4860 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235501 4860 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235511 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235521 4860 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235532 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235543 4860 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235553 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235564 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235575 4860 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235586 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235597 4860 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235608 4860 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235620 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235631 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235642 4860 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235653 4860 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235665 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235677 4860 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235687 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235698 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235708 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235717 4860 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235725 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235734 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.235742 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.238369 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.253774 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.257714 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.262005 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.267237 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.268290 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.286215 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.296422 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.307550 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.316817 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.327391 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.332427 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336239 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6385a106-293c-455e-99ef-9810b91fec6d-hosts-file\") pod \"node-resolver-wjnk2\" (UID: \"6385a106-293c-455e-99ef-9810b91fec6d\") " pod="openshift-dns/node-resolver-wjnk2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336310 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbfft\" (UniqueName: \"kubernetes.io/projected/6385a106-293c-455e-99ef-9810b91fec6d-kube-api-access-kbfft\") pod \"node-resolver-wjnk2\" (UID: \"6385a106-293c-455e-99ef-9810b91fec6d\") " pod="openshift-dns/node-resolver-wjnk2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336330 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-multus-socket-dir-parent\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336367 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-var-lib-cni-multus\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336383 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-run-netns\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336401 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-system-cni-dir\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336404 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6385a106-293c-455e-99ef-9810b91fec6d-hosts-file\") pod \"node-resolver-wjnk2\" (UID: \"6385a106-293c-455e-99ef-9810b91fec6d\") " pod="openshift-dns/node-resolver-wjnk2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336432 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-os-release\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336449 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-etc-kubernetes\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336467 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-multus-conf-dir\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336479 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-var-lib-cni-multus\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336513 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-multus-cni-dir\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336532 
4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfldp\" (UniqueName: \"kubernetes.io/projected/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-kube-api-access-dfldp\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336550 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-run-k8s-cni-cncf-io\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336566 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-var-lib-cni-bin\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336597 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-hostroot\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336611 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-multus-daemon-config\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336630 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-run-multus-certs\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336650 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-cnibin\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336681 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-cni-binary-copy\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336704 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-var-lib-kubelet\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336747 4860 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 14 
14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336760 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336770 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336801 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-multus-socket-dir-parent\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336831 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-var-lib-kubelet\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336867 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-run-k8s-cni-cncf-io\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336903 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-var-lib-cni-bin\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336929 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-hostroot\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.336979 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-run-netns\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.337018 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-system-cni-dir\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.337071 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-os-release\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.337093 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-etc-kubernetes\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.337117 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-multus-conf-dir\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.337236 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-multus-cni-dir\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.337275 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-cnibin\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.337297 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-host-run-multus-certs\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.337710 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-multus-daemon-config\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.337794 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-cni-binary-copy\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.338176 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 14:49:19 crc kubenswrapper[4860]: W1014 14:49:19.339276 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-2d1f07976200c1b9437c51bc8a063b4c15bbbec6a78d57bd8685b8814554ee4c WatchSource:0}: Error finding container 2d1f07976200c1b9437c51bc8a063b4c15bbbec6a78d57bd8685b8814554ee4c: Status 404 returned error can't find the container with id 2d1f07976200c1b9437c51bc8a063b4c15bbbec6a78d57bd8685b8814554ee4c Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.344109 4860 scope.go:117] "RemoveContainer" containerID="a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.348619 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.350915 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.386327 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.402737 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfldp\" (UniqueName: \"kubernetes.io/projected/ceb09eae-57c9-4a8e-95d5-aa40e49f7316-kube-api-access-dfldp\") pod \"multus-dcr2g\" (UID: \"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\") " pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.424011 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbfft\" (UniqueName: \"kubernetes.io/projected/6385a106-293c-455e-99ef-9810b91fec6d-kube-api-access-kbfft\") pod \"node-resolver-wjnk2\" (UID: \"6385a106-293c-455e-99ef-9810b91fec6d\") " pod="openshift-dns/node-resolver-wjnk2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.428553 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.479458 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.509317 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.518537 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dcr2g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.535375 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.548842 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wjnk2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.590286 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-6ldv4"] Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.590758 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.599973 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.600366 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.600519 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.600622 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.600721 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.601480 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-vqrjw"] Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.611608 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mdvx2"] Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.612738 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.618522 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.624668 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.629837 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.638269 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.638733 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.638884 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.639021 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.639212 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.639398 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.639532 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.639673 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.639861 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.640393 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6436186e-e1ba-4c37-b8f9-210de837a051-mcd-auth-proxy-config\") pod \"machine-config-daemon-6ldv4\" (UID: \"6436186e-e1ba-4c37-b8f9-210de837a051\") " pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.640436 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/070393d9-65ec-4cf1-a04a-c3eb9addbf91-os-release\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.640453 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/070393d9-65ec-4cf1-a04a-c3eb9addbf91-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.640488 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/070393d9-65ec-4cf1-a04a-c3eb9addbf91-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.640512 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6436186e-e1ba-4c37-b8f9-210de837a051-proxy-tls\") pod \"machine-config-daemon-6ldv4\" (UID: \"6436186e-e1ba-4c37-b8f9-210de837a051\") " pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.640533 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/070393d9-65ec-4cf1-a04a-c3eb9addbf91-system-cni-dir\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.640572 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x22d\" (UniqueName: \"kubernetes.io/projected/6436186e-e1ba-4c37-b8f9-210de837a051-kube-api-access-2x22d\") pod \"machine-config-daemon-6ldv4\" (UID: \"6436186e-e1ba-4c37-b8f9-210de837a051\") " pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.640593 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/070393d9-65ec-4cf1-a04a-c3eb9addbf91-cnibin\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.640611 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6436186e-e1ba-4c37-b8f9-210de837a051-rootfs\") pod \"machine-config-daemon-6ldv4\" (UID: \"6436186e-e1ba-4c37-b8f9-210de837a051\") " pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.640628 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/070393d9-65ec-4cf1-a04a-c3eb9addbf91-cni-binary-copy\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.640652 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5p7c\" (UniqueName: \"kubernetes.io/projected/070393d9-65ec-4cf1-a04a-c3eb9addbf91-kube-api-access-j5p7c\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.670445 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.684557 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.692947 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.701616 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.722119 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.739548 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.740921 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.741216 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovnkube-config\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.741239 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/070393d9-65ec-4cf1-a04a-c3eb9addbf91-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.741255 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-run-netns\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.741278 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-log-socket\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.741339 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6436186e-e1ba-4c37-b8f9-210de837a051-proxy-tls\") pod \"machine-config-daemon-6ldv4\" (UID: \"6436186e-e1ba-4c37-b8f9-210de837a051\") " pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.741357 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-systemd\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.741370 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-cni-bin\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.741377 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/070393d9-65ec-4cf1-a04a-c3eb9addbf91-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.741522 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/070393d9-65ec-4cf1-a04a-c3eb9addbf91-system-cni-dir\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.741794 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:49:20.741755104 +0000 UTC m=+22.328538593 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.741852 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/070393d9-65ec-4cf1-a04a-c3eb9addbf91-system-cni-dir\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.742043 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.742182 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x22d\" (UniqueName: \"kubernetes.io/projected/6436186e-e1ba-4c37-b8f9-210de837a051-kube-api-access-2x22d\") pod \"machine-config-daemon-6ldv4\" (UID: \"6436186e-e1ba-4c37-b8f9-210de837a051\") " pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.742306 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-slash\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.742425 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/070393d9-65ec-4cf1-a04a-c3eb9addbf91-cnibin\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.742214 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.742557 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.742575 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.742648 4860 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:20.742624746 +0000 UTC m=+22.329408355 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.742495 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/070393d9-65ec-4cf1-a04a-c3eb9addbf91-cnibin\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.742735 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6436186e-e1ba-4c37-b8f9-210de837a051-rootfs\") pod \"machine-config-daemon-6ldv4\" (UID: \"6436186e-e1ba-4c37-b8f9-210de837a051\") " pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.742524 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6436186e-e1ba-4c37-b8f9-210de837a051-rootfs\") pod \"machine-config-daemon-6ldv4\" (UID: \"6436186e-e1ba-4c37-b8f9-210de837a051\") " pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.742922 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovn-node-metrics-cert\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.743055 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg7wr\" (UniqueName: \"kubernetes.io/projected/87a92ec1-e2b0-407d-990e-ce52a980b64b-kube-api-access-cg7wr\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.743193 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/070393d9-65ec-4cf1-a04a-c3eb9addbf91-cni-binary-copy\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.743281 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-node-log\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.743366 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5p7c\" (UniqueName: \"kubernetes.io/projected/070393d9-65ec-4cf1-a04a-c3eb9addbf91-kube-api-access-j5p7c\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.743468 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.743565 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-run-ovn-kubernetes\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.743649 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-cni-netd\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.743729 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-env-overrides\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.743820 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-etc-openvswitch\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.743904 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-openvswitch\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.744006 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-var-lib-openvswitch\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.744132 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovnkube-script-lib\") pod \"ovnkube-node-mdvx2\" (UID: 
\"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.744237 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.744355 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.744412 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:20.744395549 +0000 UTC m=+22.331179078 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.744452 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.744509 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:20.74447519 +0000 UTC m=+22.331258739 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.744580 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6436186e-e1ba-4c37-b8f9-210de837a051-mcd-auth-proxy-config\") pod \"machine-config-daemon-6ldv4\" (UID: \"6436186e-e1ba-4c37-b8f9-210de837a051\") " pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.744675 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-systemd-units\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.744800 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.744910 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/070393d9-65ec-4cf1-a04a-c3eb9addbf91-os-release\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.745005 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/070393d9-65ec-4cf1-a04a-c3eb9addbf91-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.745106 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6436186e-e1ba-4c37-b8f9-210de837a051-mcd-auth-proxy-config\") pod \"machine-config-daemon-6ldv4\" (UID: \"6436186e-e1ba-4c37-b8f9-210de837a051\") " pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.745109 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-kubelet\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.745182 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.745214 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-ovn\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.745335 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.745379 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.745392 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:19 crc kubenswrapper[4860]: E1014 14:49:19.745429 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:20.745416804 +0000 UTC m=+22.332200323 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.745786 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/070393d9-65ec-4cf1-a04a-c3eb9addbf91-os-release\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.746020 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/070393d9-65ec-4cf1-a04a-c3eb9addbf91-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.746524 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/070393d9-65ec-4cf1-a04a-c3eb9addbf91-cni-binary-copy\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.747610 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/6436186e-e1ba-4c37-b8f9-210de837a051-proxy-tls\") pod \"machine-config-daemon-6ldv4\" (UID: \"6436186e-e1ba-4c37-b8f9-210de837a051\") " pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.758889 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.763572 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x22d\" (UniqueName: \"kubernetes.io/projected/6436186e-e1ba-4c37-b8f9-210de837a051-kube-api-access-2x22d\") pod \"machine-config-daemon-6ldv4\" (UID: \"6436186e-e1ba-4c37-b8f9-210de837a051\") " pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.763678 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5p7c\" (UniqueName: \"kubernetes.io/projected/070393d9-65ec-4cf1-a04a-c3eb9addbf91-kube-api-access-j5p7c\") pod \"multus-additional-cni-plugins-vqrjw\" (UID: \"070393d9-65ec-4cf1-a04a-c3eb9addbf91\") " 
pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.775895 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.786914 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.814428 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846365 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-slash\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846406 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovn-node-metrics-cert\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846424 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg7wr\" (UniqueName: \"kubernetes.io/projected/87a92ec1-e2b0-407d-990e-ce52a980b64b-kube-api-access-cg7wr\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846451 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-node-log\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846487 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-cni-netd\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846503 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-env-overrides\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846518 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-run-ovn-kubernetes\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 
14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846534 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-etc-openvswitch\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846549 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-openvswitch\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846565 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-var-lib-openvswitch\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846580 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovnkube-script-lib\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846597 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-systemd-units\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846613 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846644 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-kubelet\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846667 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-ovn\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846682 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovnkube-config\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 
14:49:19.846696 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-run-netns\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846710 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-log-socket\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846727 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-systemd\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846743 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-cni-bin\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846810 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-cni-bin\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.846848 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-slash\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.847551 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-kubelet\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.847562 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-var-lib-openvswitch\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.848147 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovnkube-script-lib\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.848192 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-systemd-units\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.848217 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.848299 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-node-log\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.848657 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-cni-netd\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.849318 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-env-overrides\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.849381 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-run-ovn-kubernetes\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.849415 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-etc-openvswitch\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.849444 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-openvswitch\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.849668 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.849854 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-run-netns\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.849904 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-ovn\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.850405 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovnkube-config\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.850459 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-log-socket\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.850496 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-systemd\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.852739 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovn-node-metrics-cert\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.864479 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 
14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.874279 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg7wr\" (UniqueName: \"kubernetes.io/projected/87a92ec1-e2b0-407d-990e-ce52a980b64b-kube-api-access-cg7wr\") pod \"ovnkube-node-mdvx2\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.883173 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.896561 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.905326 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 
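[Annotation] The body the kubelet keeps trying to send in these records is a strategic-merge patch against pod status: the "$setElementOrder/conditions" directive lists the desired order of the conditions array by its "type" merge key, so the API server can merge only the changed entries instead of replacing the whole list. A short, self-contained Go sketch that decodes a trimmed copy of the patch above and prints that ordering — the UID and condition types are taken from the iptables-alerter-4ln5h record, everything else is abbreviated and purely illustrative:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Trimmed version of the strategic-merge patch from the log above;
    // values are illustrative, not a complete status payload.
    const patch = `{
      "metadata": {"uid": "d75a4c96-2883-4a0b-bab2-0fab2b6c0b49"},
      "status": {
        "$setElementOrder/conditions": [
          {"type": "PodReadyToStartContainers"},
          {"type": "Initialized"},
          {"type": "Ready"},
          {"type": "ContainersReady"},
          {"type": "PodScheduled"}
        ],
        "conditions": [
          {"type": "Ready", "status": "False", "reason": "ContainersNotReady"}
        ]
      }
    }`

    func main() {
        var p struct {
            Status struct {
                Order []struct {
                    Type string `json:"type"`
                } `json:"$setElementOrder/conditions"`
            } `json:"status"`
        }
        if err := json.Unmarshal([]byte(patch), &p); err != nil {
            panic(err)
        }
        // The directive carries ordering only; unchanged conditions are
        // not resent in the "conditions" list itself.
        for _, c := range p.Status.Order {
            fmt.Println(c.Type)
        }
    }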
14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.924198 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.934591 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14
T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.949675 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.950968 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: W1014 14:49:19.959487 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6436186e_e1ba_4c37_b8f9_210de837a051.slice/crio-1419f865e7c4fd62dfb2331bb4f6d14486d89d2bb831188eea88c5ab548bdeed WatchSource:0}: Error finding container 1419f865e7c4fd62dfb2331bb4f6d14486d89d2bb831188eea88c5ab548bdeed: Status 404 returned error can't find the container with id 1419f865e7c4fd62dfb2331bb4f6d14486d89d2bb831188eea88c5ab548bdeed Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.974402 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:19 crc kubenswrapper[4860]: I1014 14:49:19.989642 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.027233 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.030581 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.211139 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wjnk2" event={"ID":"6385a106-293c-455e-99ef-9810b91fec6d","Type":"ContainerStarted","Data":"73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6"} Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.211220 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wjnk2" event={"ID":"6385a106-293c-455e-99ef-9810b91fec6d","Type":"ContainerStarted","Data":"2e518965b2141b4971decdf577c582d35d6b9e11527132594da918e295e214a4"} Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.213119 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"1419f865e7c4fd62dfb2331bb4f6d14486d89d2bb831188eea88c5ab548bdeed"} Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.219999 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3"} Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.220060 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"934183c9a9999351f22d73073c0fcf87bf68243e115e8c20ac5f2ddb1a10a152"} Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.224867 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"510129213ff2770e5739d4744ae8a94cb4b7258c42f78253abecc657388e6fef"} Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.230001 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.246891 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.249151 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.252952 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2"} Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.253360 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.260357 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf"} Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.260412 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2d1f07976200c1b9437c51bc8a063b4c15bbbec6a78d57bd8685b8814554ee4c"} Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.264416 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.265090 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerStarted","Data":"a0359c23fb4b3be298dd011d31a8e240dc19f6b215a2faf49d6ded851aea9021"} Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.266517 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" event={"ID":"070393d9-65ec-4cf1-a04a-c3eb9addbf91","Type":"ContainerStarted","Data":"ba378cb381b02ffcce1137e5cf010fbf188461d891b34b8e406f0213645787b1"} Oct 14 
14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.273824 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dcr2g" event={"ID":"ceb09eae-57c9-4a8e-95d5-aa40e49f7316","Type":"ContainerStarted","Data":"854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f"} Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.273884 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dcr2g" event={"ID":"ceb09eae-57c9-4a8e-95d5-aa40e49f7316","Type":"ContainerStarted","Data":"a0354dc0d262ddd74887b39837051354c4f404609a7a8e33953e57a943f6dd27"} Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.277871 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.291202 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.304179 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.314061 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.1
1\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.332984 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.355154 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.370758 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.388632 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.404329 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.420425 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.432228 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.443319 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.461035 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.473370 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.484624 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.510486 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\
":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.538290 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.554868 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.576606 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.592302 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection 
refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.653672 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainer
Statuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.743331 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.754465 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.756668 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.780787 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.780863 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.780922 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.780988 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:20 crc kubenswrapper[4860]: E1014 14:49:20.781129 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:49:22.781077728 +0000 UTC m=+24.367861187 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:49:20 crc kubenswrapper[4860]: E1014 14:49:20.781135 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 14:49:20 crc kubenswrapper[4860]: E1014 14:49:20.781206 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 14:49:20 crc kubenswrapper[4860]: E1014 14:49:20.781252 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 14:49:20 crc kubenswrapper[4860]: E1014 14:49:20.781264 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:22.781239852 +0000 UTC m=+24.368023301 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 14:49:20 crc kubenswrapper[4860]: E1014 14:49:20.781255 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 14:49:20 crc kubenswrapper[4860]: E1014 14:49:20.781340 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:20 crc kubenswrapper[4860]: E1014 14:49:20.781216 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 14:49:20 crc kubenswrapper[4860]: E1014 14:49:20.781416 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 14:49:20 crc kubenswrapper[4860]: E1014 14:49:20.781426 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:20 crc kubenswrapper[4860]: E1014 14:49:20.781322 4860 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:22.781299664 +0000 UTC m=+24.368083183 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 14:49:20 crc kubenswrapper[4860]: E1014 14:49:20.781486 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:22.781462748 +0000 UTC m=+24.368246267 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:20 crc kubenswrapper[4860]: E1014 14:49:20.781504 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:22.781496338 +0000 UTC m=+24.368279897 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.789132 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.793449 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.804859 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.805735 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.817551 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/li
b/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.838457 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.854347 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.893132 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.933291 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection 
refused" Oct 14 14:49:20 crc kubenswrapper[4860]: I1014 14:49:20.982696 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainer
Statuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.019109 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.057928 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.061182 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.061213 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:21 crc kubenswrapper[4860]: E1014 14:49:21.061316 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.061360 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:21 crc kubenswrapper[4860]: E1014 14:49:21.061456 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:21 crc kubenswrapper[4860]: E1014 14:49:21.061618 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.065769 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.066615 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.068129 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.068905 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.069984 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.070529 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.071159 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.072186 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.072826 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.073948 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.074593 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.075834 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.076490 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.077384 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.078461 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.079142 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.080326 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.080730 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.081315 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.082960 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.083660 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.084948 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.085494 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.086757 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.087910 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.088605 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.089820 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.090410 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.091643 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.092260 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.093306 4860 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.093426 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.095368 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.096628 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.097374 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.097459 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.099427 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.100292 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.101517 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.102325 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.103606 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.104135 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.105108 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.105767 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.106781 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.107319 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.108466 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.109155 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.111157 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.111769 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.112686 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.113219 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" 
path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.114514 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.115246 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.115839 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.116838 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-2thzv"] Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.117408 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-2thzv" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.132943 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.146825 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.167096 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.185053 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05162975-38db-40bf-9eb5-4d9bc165cb83-host\") pod \"node-ca-2thzv\" (UID: \"05162975-38db-40bf-9eb5-4d9bc165cb83\") " pod="openshift-image-registry/node-ca-2thzv" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.185351 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcd9c\" (UniqueName: \"kubernetes.io/projected/05162975-38db-40bf-9eb5-4d9bc165cb83-kube-api-access-vcd9c\") pod \"node-ca-2thzv\" (UID: \"05162975-38db-40bf-9eb5-4d9bc165cb83\") " pod="openshift-image-registry/node-ca-2thzv" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.185463 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/05162975-38db-40bf-9eb5-4d9bc165cb83-serviceca\") pod \"node-ca-2thzv\" (UID: \"05162975-38db-40bf-9eb5-4d9bc165cb83\") " pod="openshift-image-registry/node-ca-2thzv" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.186835 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.207854 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.256015 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.278632 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6"} Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.278738 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81"} Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.280313 4860 generic.go:334] "Generic (PLEG): container finished" podID="070393d9-65ec-4cf1-a04a-c3eb9addbf91" containerID="8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491" exitCode=0 Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.280380 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" event={"ID":"070393d9-65ec-4cf1-a04a-c3eb9addbf91","Type":"ContainerDied","Data":"8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491"} Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.282269 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e"} Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.283729 4860 generic.go:334] "Generic (PLEG): container finished" podID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerID="721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a" exitCode=0 Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.284227 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerDied","Data":"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a"} Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.285845 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05162975-38db-40bf-9eb5-4d9bc165cb83-host\") pod \"node-ca-2thzv\" (UID: \"05162975-38db-40bf-9eb5-4d9bc165cb83\") " pod="openshift-image-registry/node-ca-2thzv" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.285976 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcd9c\" (UniqueName: \"kubernetes.io/projected/05162975-38db-40bf-9eb5-4d9bc165cb83-kube-api-access-vcd9c\") pod \"node-ca-2thzv\" (UID: 
\"05162975-38db-40bf-9eb5-4d9bc165cb83\") " pod="openshift-image-registry/node-ca-2thzv" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.286083 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/05162975-38db-40bf-9eb5-4d9bc165cb83-serviceca\") pod \"node-ca-2thzv\" (UID: \"05162975-38db-40bf-9eb5-4d9bc165cb83\") " pod="openshift-image-registry/node-ca-2thzv" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.286003 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05162975-38db-40bf-9eb5-4d9bc165cb83-host\") pod \"node-ca-2thzv\" (UID: \"05162975-38db-40bf-9eb5-4d9bc165cb83\") " pod="openshift-image-registry/node-ca-2thzv" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.286961 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/05162975-38db-40bf-9eb5-4d9bc165cb83-serviceca\") pod \"node-ca-2thzv\" (UID: \"05162975-38db-40bf-9eb5-4d9bc165cb83\") " pod="openshift-image-registry/node-ca-2thzv" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.290884 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.324787 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcd9c\" (UniqueName: \"kubernetes.io/projected/05162975-38db-40bf-9eb5-4d9bc165cb83-kube-api-access-vcd9c\") pod \"node-ca-2thzv\" (UID: \"05162975-38db-40bf-9eb5-4d9bc165cb83\") " pod="openshift-image-registry/node-ca-2thzv" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.357001 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.399133 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\
":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:21Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.428795 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-2thzv" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.442380 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:21Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.478002 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:21Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.518126 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:21Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.553552 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:21Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.601844 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:21Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.635243 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:21Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.673540 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:21Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.715299 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:21Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.758430 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:21Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.795542 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:21Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.833858 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:21Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.914983 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:21Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.959886 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:21Z 
is after 2025-08-24T17:21:41Z" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.978214 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:21Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:21 crc kubenswrapper[4860]: I1014 14:49:21.993514 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:21Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.031674 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.078537 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255
d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.120484 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.159097 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.194687 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.235829 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.276951 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.289122 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" event={"ID":"070393d9-65ec-4cf1-a04a-c3eb9addbf91","Type":"ContainerStarted","Data":"b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9"} Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.294046 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerStarted","Data":"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a"} Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.294102 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerStarted","Data":"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1"} Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.294114 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerStarted","Data":"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6"} Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.294125 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerStarted","Data":"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c"} Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.295689 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3"} Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.297497 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/node-ca-2thzv" event={"ID":"05162975-38db-40bf-9eb5-4d9bc165cb83","Type":"ContainerStarted","Data":"92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242"} Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.297541 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2thzv" event={"ID":"05162975-38db-40bf-9eb5-4d9bc165cb83","Type":"ContainerStarted","Data":"a9af5fa892fc5918a1d992fbb3e1017e6a8ab5732aa9ffebc928c06d7f9778b9"} Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.316860 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.354836 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.395068 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.438063 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.439113 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.443606 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.474183 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.496422 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.536387 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.581759 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.617732 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.655618 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.705832 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev
/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.743734 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255
d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.775686 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.800452 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.800602 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:22 crc kubenswrapper[4860]: 
I1014 14:49:22.800642 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.800701 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.800731 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:22 crc kubenswrapper[4860]: E1014 14:49:22.800864 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 14:49:22 crc kubenswrapper[4860]: E1014 14:49:22.800932 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:26.800910582 +0000 UTC m=+28.387694031 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 14:49:22 crc kubenswrapper[4860]: E1014 14:49:22.800996 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:49:26.800987933 +0000 UTC m=+28.387771382 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:49:22 crc kubenswrapper[4860]: E1014 14:49:22.801089 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 14:49:22 crc kubenswrapper[4860]: E1014 14:49:22.801131 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 14:49:22 crc kubenswrapper[4860]: E1014 14:49:22.801146 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:22 crc kubenswrapper[4860]: E1014 14:49:22.801176 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:26.801167178 +0000 UTC m=+28.387950637 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:22 crc kubenswrapper[4860]: E1014 14:49:22.801238 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 14:49:22 crc kubenswrapper[4860]: E1014 14:49:22.801252 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 14:49:22 crc kubenswrapper[4860]: E1014 14:49:22.801264 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:22 crc kubenswrapper[4860]: E1014 14:49:22.801296 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:26.801286621 +0000 UTC m=+28.388070070 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:22 crc kubenswrapper[4860]: E1014 14:49:22.801338 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 14:49:22 crc kubenswrapper[4860]: E1014 14:49:22.801369 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:26.801360213 +0000 UTC m=+28.388143662 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.814566 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325
7453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.855488 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915f
b8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.893843 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-contro
ller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.933688 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:22 crc kubenswrapper[4860]: I1014 14:49:22.975805 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.014634 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.056048 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.061339 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.061363 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.061370 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:23 crc kubenswrapper[4860]: E1014 14:49:23.061566 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:23 crc kubenswrapper[4860]: E1014 14:49:23.061650 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:23 crc kubenswrapper[4860]: E1014 14:49:23.061770 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.095809 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.137344 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.180157 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.219897 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.261919 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441e
cd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.295723 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.302009 4860 generic.go:334] "Generic (PLEG): container finished" podID="070393d9-65ec-4cf1-a04a-c3eb9addbf91" containerID="b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9" exitCode=0 Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.302091 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" event={"ID":"070393d9-65ec-4cf1-a04a-c3eb9addbf91","Type":"ContainerDied","Data":"b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9"} Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.307878 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" 
event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerStarted","Data":"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573"} Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.307932 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerStarted","Data":"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86"} Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.339088 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.379173 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.459743 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.476410 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.505323 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.544549 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.578191 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.614190 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.652965 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.692988 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.735780 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.781022 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.814066 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.856736 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.894426 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.941786 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:23 crc kubenswrapper[4860]: I1014 14:49:23.975212 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.014514 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.054287 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.311945 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" event={"ID":"070393d9-65ec-4cf1-a04a-c3eb9addbf91","Type":"ContainerStarted","Data":"a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9"} Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.324349 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.343078 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.355995 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.367674 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.377997 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.386541 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.397421 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.415430 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.426724 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.456252 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.493512 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.531848 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.582382 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21
016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.614257 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.654249 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.818229 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.820161 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.820194 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.820204 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.820326 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.838122 4860 kubelet_node_status.go:115] "Node was previously registered" node="crc" 
Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.838458 4860 kubelet_node_status.go:79] "Successfully registered node" node="crc" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.839681 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.839714 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.839721 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.839736 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.839747 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:24Z","lastTransitionTime":"2025-10-14T14:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:24 crc kubenswrapper[4860]: E1014 14:49:24.892844 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 
2025-08-24T17:21:41Z" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.900200 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.900235 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.900242 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.900257 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.900266 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:24Z","lastTransitionTime":"2025-10-14T14:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:24 crc kubenswrapper[4860]: E1014 14:49:24.926543 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 
2025-08-24T17:21:41Z"
Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.931263 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.931297 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.931306 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.931321 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.931330 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:24Z","lastTransitionTime":"2025-10-14T14:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:49:24 crc kubenswrapper[4860]: E1014 14:49:24.951373 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 
2025-08-24T17:21:41Z"
Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.956720 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.956759 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.956768 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.956783 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.956793 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:24Z","lastTransitionTime":"2025-10-14T14:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:49:24 crc kubenswrapper[4860]: E1014 14:49:24.982002 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" [status patch payload elided; byte-identical to the 14:49:24.951373 attempt above]
Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.985236 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.985357 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.985417 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.985511 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:24 crc kubenswrapper[4860]: I1014 14:49:24.985579 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:24Z","lastTransitionTime":"2025-10-14T14:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:49:24 crc kubenswrapper[4860]: E1014 14:49:24.998095 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:24Z is after 2025-08-24T17:21:41Z" [status patch payload elided; byte-identical to the 14:49:24.951373 attempt above]
Oct 14 14:49:24 crc kubenswrapper[4860]: E1014 14:49:24.998265 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.000294 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.000326 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.000336 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.000351 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.000360 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:25Z","lastTransitionTime":"2025-10-14T14:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.061556 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 14:49:25 crc kubenswrapper[4860]: E1014 14:49:25.062063 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.061898 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 14:49:25 crc kubenswrapper[4860]: E1014 14:49:25.062139 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.061783 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 14:49:25 crc kubenswrapper[4860]: E1014 14:49:25.062189 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
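Every "Error updating node status" retry above fails at the same point: the node.network-node-identity.openshift.io webhook on https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-10-14. The following minimal Python sketch (illustrative only, not part of this log; it assumes the webhook is still listening on that loopback port) reproduces the kubelet's TLS verification failure from the node itself:

    import socket
    import ssl

    # Hypothetical check: perform a verified TLS handshake against the
    # webhook endpoint named in the log and surface the x509 error.
    HOST, PORT = "127.0.0.1", 9743

    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                # Reached only if the chain verifies against the default
                # trust store; notAfter is the certificate expiry.
                print("certificate valid until:", tls.getpeercert()["notAfter"])
    except ssl.SSLCertVerificationError as err:
        # On this node this prints "certificate has expired"; an internal,
        # untrusted CA would also land here, with a different verify_message.
        print("verification failed:", err.verify_message)

The expiry can equally be read by piping openssl s_client output into openssl x509 -noout -enddate; the point of the sketch is only that every status patch is rejected before it reaches the API because of this one certificate.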
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.102557 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.102593 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.102601 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.102618 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.102630 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:25Z","lastTransitionTime":"2025-10-14T14:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.204446 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.204486 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.204495 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.204510 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.204518 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:25Z","lastTransitionTime":"2025-10-14T14:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.307309 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.307351 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.307360 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.307378 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.307388 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:25Z","lastTransitionTime":"2025-10-14T14:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.316100 4860 generic.go:334] "Generic (PLEG): container finished" podID="070393d9-65ec-4cf1-a04a-c3eb9addbf91" containerID="a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9" exitCode=0
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.316183 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" event={"ID":"070393d9-65ec-4cf1-a04a-c3eb9addbf91","Type":"ContainerDied","Data":"a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9"}
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.330816 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:25Z is after 2025-08-24T17:21:41Z"
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.341637 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:25Z is after 2025-08-24T17:21:41Z"
Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.359563 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os
-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:25Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.370800 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:25Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.381521 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:25Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.394579 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:25Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.405690 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:25Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.409061 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.409101 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.409113 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.409130 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.409143 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:25Z","lastTransitionTime":"2025-10-14T14:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.422593 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b24
9f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:25Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.434211 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:25Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.444281 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:25Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.452901 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:25Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.468903 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255
d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:25Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.480409 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:25Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.493575 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:25Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.503500 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:25Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.511512 4860 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.511549 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.511560 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.511578 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.511590 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:25Z","lastTransitionTime":"2025-10-14T14:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.613231 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.613275 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.613287 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.613303 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.613314 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:25Z","lastTransitionTime":"2025-10-14T14:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.715665 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.715712 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.715725 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.715745 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.715755 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:25Z","lastTransitionTime":"2025-10-14T14:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.818000 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.818056 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.818066 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.818085 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.818096 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:25Z","lastTransitionTime":"2025-10-14T14:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.920933 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.921231 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.921349 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.921433 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:25 crc kubenswrapper[4860]: I1014 14:49:25.921508 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:25Z","lastTransitionTime":"2025-10-14T14:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.024544 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.024583 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.024590 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.024608 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.024620 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:26Z","lastTransitionTime":"2025-10-14T14:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.130242 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.130833 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.130902 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.130923 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.130935 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:26Z","lastTransitionTime":"2025-10-14T14:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.234973 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.235019 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.235046 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.235064 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.235077 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:26Z","lastTransitionTime":"2025-10-14T14:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.324426 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerStarted","Data":"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe"} Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.337102 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.337147 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.337156 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.337176 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.337186 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:26Z","lastTransitionTime":"2025-10-14T14:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.439213 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.439259 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.439271 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.439288 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.439299 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:26Z","lastTransitionTime":"2025-10-14T14:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.541945 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.541985 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.541996 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.542014 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.542040 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:26Z","lastTransitionTime":"2025-10-14T14:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.644635 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.644679 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.644689 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.644706 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.644716 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:26Z","lastTransitionTime":"2025-10-14T14:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.747244 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.747288 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.747298 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.747317 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.747331 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:26Z","lastTransitionTime":"2025-10-14T14:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.840968 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.841100 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.841141 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.841167 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:26 crc kubenswrapper[4860]: E1014 14:49:26.841197 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:49:34.841165244 +0000 UTC m=+36.427948693 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.841242 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:26 crc kubenswrapper[4860]: E1014 14:49:26.841278 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 14:49:26 crc kubenswrapper[4860]: E1014 14:49:26.841324 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 14:49:26 crc kubenswrapper[4860]: E1014 14:49:26.841340 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:34.841323288 +0000 UTC m=+36.428106737 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 14:49:26 crc kubenswrapper[4860]: E1014 14:49:26.841286 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 14:49:26 crc kubenswrapper[4860]: E1014 14:49:26.841374 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:34.841358909 +0000 UTC m=+36.428142358 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 14:49:26 crc kubenswrapper[4860]: E1014 14:49:26.841380 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 14:49:26 crc kubenswrapper[4860]: E1014 14:49:26.841380 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 14:49:26 crc kubenswrapper[4860]: E1014 14:49:26.841396 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:26 crc kubenswrapper[4860]: E1014 14:49:26.841399 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 14:49:26 crc kubenswrapper[4860]: E1014 14:49:26.841408 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:26 crc kubenswrapper[4860]: E1014 14:49:26.841429 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:34.8414192 +0000 UTC m=+36.428202659 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:26 crc kubenswrapper[4860]: E1014 14:49:26.841445 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:34.8414387 +0000 UTC m=+36.428222269 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.849362 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.849401 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.849413 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.849431 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.849442 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:26Z","lastTransitionTime":"2025-10-14T14:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.951740 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.951771 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.951780 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.951794 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:26 crc kubenswrapper[4860]: I1014 14:49:26.951804 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:26Z","lastTransitionTime":"2025-10-14T14:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.054481 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.054522 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.054531 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.054546 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.054557 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:27Z","lastTransitionTime":"2025-10-14T14:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.060854 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.061159 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.061246 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:27 crc kubenswrapper[4860]: E1014 14:49:27.061391 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:27 crc kubenswrapper[4860]: E1014 14:49:27.061575 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:27 crc kubenswrapper[4860]: E1014 14:49:27.061699 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.156494 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.156580 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.156590 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.156609 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.156619 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:27Z","lastTransitionTime":"2025-10-14T14:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.259532 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.259743 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.259870 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.259972 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.260062 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:27Z","lastTransitionTime":"2025-10-14T14:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.363539 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.364398 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.364412 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.364431 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.364444 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:27Z","lastTransitionTime":"2025-10-14T14:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.467956 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.468007 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.468021 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.468076 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.468091 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:27Z","lastTransitionTime":"2025-10-14T14:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.570879 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.570942 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.570957 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.570981 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.570997 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:27Z","lastTransitionTime":"2025-10-14T14:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.674646 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.674701 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.674719 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.674746 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.674766 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:27Z","lastTransitionTime":"2025-10-14T14:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.777440 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.777481 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.777493 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.777512 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.777523 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:27Z","lastTransitionTime":"2025-10-14T14:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.881631 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.881713 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.881736 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.881774 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.881797 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:27Z","lastTransitionTime":"2025-10-14T14:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.985850 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.985917 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.985937 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.985963 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:27 crc kubenswrapper[4860]: I1014 14:49:27.985983 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:27Z","lastTransitionTime":"2025-10-14T14:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.089330 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.089430 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.089459 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.089538 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.089565 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:28Z","lastTransitionTime":"2025-10-14T14:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.192676 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.192715 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.192726 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.192741 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.192751 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:28Z","lastTransitionTime":"2025-10-14T14:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.298591 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.298656 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.298668 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.298689 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.298702 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:28Z","lastTransitionTime":"2025-10-14T14:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.336098 4860 generic.go:334] "Generic (PLEG): container finished" podID="070393d9-65ec-4cf1-a04a-c3eb9addbf91" containerID="e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d" exitCode=0 Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.336160 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" event={"ID":"070393d9-65ec-4cf1-a04a-c3eb9addbf91","Type":"ContainerDied","Data":"e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d"} Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.367464 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255
d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:28Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.385911 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:28Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.401763 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.401807 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.401817 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.401840 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.401854 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:28Z","lastTransitionTime":"2025-10-14T14:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.408222 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:28Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.427196 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:28Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.445174 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:28Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.458669 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:28Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.476632 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:28Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.491191 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:28Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.504398 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.504450 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.504463 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.504483 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.504497 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:28Z","lastTransitionTime":"2025-10-14T14:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.510884 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:28Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.529006 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:28Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.546692 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:28Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.562631 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:28Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.575787 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:28Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.590668 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:28Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.608208 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.608256 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.608268 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.608287 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.608303 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:28Z","lastTransitionTime":"2025-10-14T14:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.615064 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:28Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.710746 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.710810 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.710830 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.710858 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.710882 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:28Z","lastTransitionTime":"2025-10-14T14:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.814079 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.814660 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.814740 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.814826 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.814883 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:28Z","lastTransitionTime":"2025-10-14T14:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.918788 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.918838 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.918851 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.918873 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:28 crc kubenswrapper[4860]: I1014 14:49:28.918888 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:28Z","lastTransitionTime":"2025-10-14T14:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.022327 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.022398 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.022422 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.022449 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.022468 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:29Z","lastTransitionTime":"2025-10-14T14:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.063256 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:29 crc kubenswrapper[4860]: E1014 14:49:29.063395 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.063441 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:29 crc kubenswrapper[4860]: E1014 14:49:29.063564 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.064856 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:29 crc kubenswrapper[4860]: E1014 14:49:29.064928 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.088786 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.105783 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.126942 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.127008 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.127018 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.127067 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.127079 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:29Z","lastTransitionTime":"2025-10-14T14:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.128491 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.145412 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.167119 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.187078 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.202464 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.217454 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.229376 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.229430 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.229440 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.229458 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.229470 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:29Z","lastTransitionTime":"2025-10-14T14:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.235328 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.257442 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",
\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o
://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.275289 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.287853 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.300691 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.312181 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.320681 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.331120 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.331163 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.331174 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.331190 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.331200 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:29Z","lastTransitionTime":"2025-10-14T14:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.341733 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerStarted","Data":"df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2"} Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.346570 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.347668 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.347722 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.369571 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\
\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.383108 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.399393 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.413614 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.425264 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.434957 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.435003 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.435017 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.435073 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.435097 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:29Z","lastTransitionTime":"2025-10-14T14:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.441676 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.458712 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.507420 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.537990 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.538231 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.538298 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.538361 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.538423 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:29Z","lastTransitionTime":"2025-10-14T14:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.550847 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.572468 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.587491 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.600607 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.613603 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.625408 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.641686 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.641728 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.641740 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.641758 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.641768 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:29Z","lastTransitionTime":"2025-10-14T14:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.644678 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\
"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.744485 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.744813 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.744901 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.745000 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.745157 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:29Z","lastTransitionTime":"2025-10-14T14:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.848560 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.848612 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.848623 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.848643 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.848654 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:29Z","lastTransitionTime":"2025-10-14T14:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.923010 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.925009 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.940777 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.951829 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.952098 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.952212 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.952292 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.952362 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:29Z","lastTransitionTime":"2025-10-14T14:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.959146 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.982112 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:29 crc kubenswrapper[4860]: I1014 14:49:29.999374 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.017438 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.040789 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.055350 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.055386 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.055403 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.055420 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.055432 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:30Z","lastTransitionTime":"2025-10-14T14:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.113320 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:
49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.136394 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.154246 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.158275 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.158333 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.158348 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.158378 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.158392 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:30Z","lastTransitionTime":"2025-10-14T14:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.170572 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.192927 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.224932 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255
d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.244445 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.261395 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.261441 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.261449 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.261466 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.261478 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:30Z","lastTransitionTime":"2025-10-14T14:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.263233 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.280735 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.298089 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.324641 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.340942 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.350559 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" event={"ID":"070393d9-65ec-4cf1-a04a-c3eb9addbf91","Type":"ContainerStarted","Data":"96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93"} Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.357690 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/
k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.363797 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.363850 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.363860 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.363879 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.363890 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:30Z","lastTransitionTime":"2025-10-14T14:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.384741 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:
49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.405555 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.419881 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.432461 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.456280 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.466841 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.466871 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.466880 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.466896 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.466905 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:30Z","lastTransitionTime":"2025-10-14T14:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.480145 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.494004 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.505999 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.520657 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.535741 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.552651 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:30Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.569504 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.569540 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.569549 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.569564 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.569574 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:30Z","lastTransitionTime":"2025-10-14T14:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.672014 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.672083 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.672097 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.672116 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.672129 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:30Z","lastTransitionTime":"2025-10-14T14:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.775720 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.775768 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.775777 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.775798 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.775810 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:30Z","lastTransitionTime":"2025-10-14T14:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.879247 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.879300 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.879313 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.879331 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.879346 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:30Z","lastTransitionTime":"2025-10-14T14:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.982691 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.982728 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.982738 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.982755 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:30 crc kubenswrapper[4860]: I1014 14:49:30.982765 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:30Z","lastTransitionTime":"2025-10-14T14:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.061561 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.061627 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:31 crc kubenswrapper[4860]: E1014 14:49:31.061740 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:31 crc kubenswrapper[4860]: E1014 14:49:31.061851 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.061882 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:31 crc kubenswrapper[4860]: E1014 14:49:31.061938 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.086248 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.086313 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.086325 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.086343 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.086354 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:31Z","lastTransitionTime":"2025-10-14T14:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.189531 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.189608 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.189634 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.189665 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.189687 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:31Z","lastTransitionTime":"2025-10-14T14:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.292666 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.292721 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.292730 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.292747 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.292757 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:31Z","lastTransitionTime":"2025-10-14T14:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.363991 4860 generic.go:334] "Generic (PLEG): container finished" podID="070393d9-65ec-4cf1-a04a-c3eb9addbf91" containerID="96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93" exitCode=0 Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.364080 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" event={"ID":"070393d9-65ec-4cf1-a04a-c3eb9addbf91","Type":"ContainerDied","Data":"96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93"} Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.379648 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.394521 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.394588 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.394599 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.394637 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.394652 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:31Z","lastTransitionTime":"2025-10-14T14:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.396461 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.411545 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.425608 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.442239 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.458627 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.472893 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.490897 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.499567 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.499640 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.499656 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.499681 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.499697 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:31Z","lastTransitionTime":"2025-10-14T14:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.503428 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.516945 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.536070 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.550697 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.568010 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.580991 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.598870 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.602583 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.602612 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.602620 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.602636 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.602647 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:31Z","lastTransitionTime":"2025-10-14T14:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.704520 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.704558 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.704567 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.704586 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.704596 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:31Z","lastTransitionTime":"2025-10-14T14:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.807274 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.807313 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.807324 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.807342 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.807351 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:31Z","lastTransitionTime":"2025-10-14T14:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.910282 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.910336 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.910346 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.910363 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.910374 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:31Z","lastTransitionTime":"2025-10-14T14:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.947249 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn"] Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.947773 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.950052 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.951436 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.970192 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df995e887249f8b6eb67280a463de7d15c7b9da9
c13d706f09aab45fbaa4d5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:31 crc kubenswrapper[4860]: I1014 14:49:31.986428 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.002040 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:31Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.012528 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.012587 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.012601 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.012622 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.012635 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:32Z","lastTransitionTime":"2025-10-14T14:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.017805 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.031282 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd2cd739-fe15-4cc1-881e-a20faa721bb3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kxsqn\" (UID: \"bd2cd739-fe15-4cc1-881e-a20faa721bb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.031439 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd2cd739-fe15-4cc1-881e-a20faa721bb3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kxsqn\" (UID: \"bd2cd739-fe15-4cc1-881e-a20faa721bb3\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.031507 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq9j7\" (UniqueName: \"kubernetes.io/projected/bd2cd739-fe15-4cc1-881e-a20faa721bb3-kube-api-access-kq9j7\") pod \"ovnkube-control-plane-749d76644c-kxsqn\" (UID: \"bd2cd739-fe15-4cc1-881e-a20faa721bb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.031592 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd2cd739-fe15-4cc1-881e-a20faa721bb3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kxsqn\" (UID: \"bd2cd739-fe15-4cc1-881e-a20faa721bb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.046066 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-1
0-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01
d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.060407 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.073972 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.091913 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.105841 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.115333 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.115371 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.115380 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.115396 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.115406 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:32Z","lastTransitionTime":"2025-10-14T14:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.123925 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.132138 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd2cd739-fe15-4cc1-881e-a20faa721bb3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kxsqn\" (UID: \"bd2cd739-fe15-4cc1-881e-a20faa721bb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.132180 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd2cd739-fe15-4cc1-881e-a20faa721bb3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kxsqn\" (UID: 
\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.132206 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq9j7\" (UniqueName: \"kubernetes.io/projected/bd2cd739-fe15-4cc1-881e-a20faa721bb3-kube-api-access-kq9j7\") pod \"ovnkube-control-plane-749d76644c-kxsqn\" (UID: \"bd2cd739-fe15-4cc1-881e-a20faa721bb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.132224 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd2cd739-fe15-4cc1-881e-a20faa721bb3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kxsqn\" (UID: \"bd2cd739-fe15-4cc1-881e-a20faa721bb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.132786 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd2cd739-fe15-4cc1-881e-a20faa721bb3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kxsqn\" (UID: \"bd2cd739-fe15-4cc1-881e-a20faa721bb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.133419 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd2cd739-fe15-4cc1-881e-a20faa721bb3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kxsqn\" (UID: \"bd2cd739-fe15-4cc1-881e-a20faa721bb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.136385 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.144057 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd2cd739-fe15-4cc1-881e-a20faa721bb3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kxsqn\" (UID: \"bd2cd739-fe15-4cc1-881e-a20faa721bb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.147709 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq9j7\" (UniqueName: \"kubernetes.io/projected/bd2cd739-fe15-4cc1-881e-a20faa721bb3-kube-api-access-kq9j7\") pod \"ovnkube-control-plane-749d76644c-kxsqn\" (UID: \"bd2cd739-fe15-4cc1-881e-a20faa721bb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.154973 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.166463 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.177716 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.189236 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.201161 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.218399 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.218428 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.218437 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.218453 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.218463 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:32Z","lastTransitionTime":"2025-10-14T14:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.268409 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" Oct 14 14:49:32 crc kubenswrapper[4860]: W1014 14:49:32.281214 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd2cd739_fe15_4cc1_881e_a20faa721bb3.slice/crio-b6495f9465fe5c8cbd219009b18e7de7e11bca624a2e51e0cab26d4f1e2cc630 WatchSource:0}: Error finding container b6495f9465fe5c8cbd219009b18e7de7e11bca624a2e51e0cab26d4f1e2cc630: Status 404 returned error can't find the container with id b6495f9465fe5c8cbd219009b18e7de7e11bca624a2e51e0cab26d4f1e2cc630 Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.321341 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.321371 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.321381 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.321397 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.321406 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:32Z","lastTransitionTime":"2025-10-14T14:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.370221 4860 generic.go:334] "Generic (PLEG): container finished" podID="070393d9-65ec-4cf1-a04a-c3eb9addbf91" containerID="82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b" exitCode=0 Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.370299 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" event={"ID":"070393d9-65ec-4cf1-a04a-c3eb9addbf91","Type":"ContainerDied","Data":"82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b"} Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.371842 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" event={"ID":"bd2cd739-fe15-4cc1-881e-a20faa721bb3","Type":"ContainerStarted","Data":"b6495f9465fe5c8cbd219009b18e7de7e11bca624a2e51e0cab26d4f1e2cc630"} Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.388610 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:4
9:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\
\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.411631 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.424321 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.425737 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.425800 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.425810 4860 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.425828 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.425838 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:32Z","lastTransitionTime":"2025-10-14T14:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.436764 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.451736 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.461517 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.472111 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.486465 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.499120 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.511858 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.528311 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.528540 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.528573 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.529370 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.529403 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.529415 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:32Z","lastTransitionTime":"2025-10-14T14:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.546097 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121a
edcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.561084 4860 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.575364 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.592262 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.611579 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:32Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.632629 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.632679 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.632688 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.632704 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.632715 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:32Z","lastTransitionTime":"2025-10-14T14:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.736483 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.736529 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.736542 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.736568 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.736581 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:32Z","lastTransitionTime":"2025-10-14T14:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.845957 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.846072 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.846109 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.846160 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.846281 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:32Z","lastTransitionTime":"2025-10-14T14:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.949569 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.949642 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.949666 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.949698 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:32 crc kubenswrapper[4860]: I1014 14:49:32.949721 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:32Z","lastTransitionTime":"2025-10-14T14:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.052708 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.052785 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.052808 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.052842 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.052863 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:33Z","lastTransitionTime":"2025-10-14T14:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.061123 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.061146 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:33 crc kubenswrapper[4860]: E1014 14:49:33.061279 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:33 crc kubenswrapper[4860]: E1014 14:49:33.061379 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.061467 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:33 crc kubenswrapper[4860]: E1014 14:49:33.061514 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.155916 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.155982 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.156003 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.156060 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.156081 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:33Z","lastTransitionTime":"2025-10-14T14:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.259439 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.259509 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.259528 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.259556 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.259577 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:33Z","lastTransitionTime":"2025-10-14T14:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.363260 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.363325 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.363344 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.363375 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.363395 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:33Z","lastTransitionTime":"2025-10-14T14:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.379046 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" event={"ID":"bd2cd739-fe15-4cc1-881e-a20faa721bb3","Type":"ContainerStarted","Data":"f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472"} Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.385383 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" event={"ID":"070393d9-65ec-4cf1-a04a-c3eb9addbf91","Type":"ContainerStarted","Data":"d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479"} Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.440388 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-vtscw"] Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.440844 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:33 crc kubenswrapper[4860]: E1014 14:49:33.440904 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.444649 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mwnl\" (UniqueName: \"kubernetes.io/projected/2b36dd73-c75d-446e-85fe-d11afdd5a816-kube-api-access-7mwnl\") pod \"network-metrics-daemon-vtscw\" (UID: \"2b36dd73-c75d-446e-85fe-d11afdd5a816\") " pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.444716 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs\") pod \"network-metrics-daemon-vtscw\" (UID: \"2b36dd73-c75d-446e-85fe-d11afdd5a816\") " pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.458785 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"
resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.465811 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.465841 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.465849 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.465865 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.465875 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:33Z","lastTransitionTime":"2025-10-14T14:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.481111 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.495718 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.515163 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.530916 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf0
73dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.542094 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.545039 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mwnl\" 
(UniqueName: \"kubernetes.io/projected/2b36dd73-c75d-446e-85fe-d11afdd5a816-kube-api-access-7mwnl\") pod \"network-metrics-daemon-vtscw\" (UID: \"2b36dd73-c75d-446e-85fe-d11afdd5a816\") " pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.545107 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs\") pod \"network-metrics-daemon-vtscw\" (UID: \"2b36dd73-c75d-446e-85fe-d11afdd5a816\") " pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:33 crc kubenswrapper[4860]: E1014 14:49:33.545217 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 14:49:33 crc kubenswrapper[4860]: E1014 14:49:33.545275 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs podName:2b36dd73-c75d-446e-85fe-d11afdd5a816 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:34.045261357 +0000 UTC m=+35.632044806 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs") pod "network-metrics-daemon-vtscw" (UID: "2b36dd73-c75d-446e-85fe-d11afdd5a816") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.554900 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.568445 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.569724 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mwnl\" (UniqueName: \"kubernetes.io/projected/2b36dd73-c75d-446e-85fe-d11afdd5a816-kube-api-access-7mwnl\") pod \"network-metrics-daemon-vtscw\" (UID: \"2b36dd73-c75d-446e-85fe-d11afdd5a816\") " pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.570488 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.570519 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.570528 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 
14:49:33.570549 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.570559 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:33Z","lastTransitionTime":"2025-10-14T14:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.580163 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.597914 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.616507 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255
d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.629779 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.643507 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.659703 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.672611 4860 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.672661 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.672670 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.672685 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.672695 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:33Z","lastTransitionTime":"2025-10-14T14:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.674908 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.687021 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.699722 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:33Z is after 2025-08-24T17:21:41Z"
Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.775100 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.775152 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.775166 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.775185 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.775204 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:33Z","lastTransitionTime":"2025-10-14T14:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.877934 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.878215 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.878286 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.878410 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.878504 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:33Z","lastTransitionTime":"2025-10-14T14:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.981204 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.981254 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.981263 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.981280 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:33 crc kubenswrapper[4860]: I1014 14:49:33.981293 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:33Z","lastTransitionTime":"2025-10-14T14:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.050077 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs\") pod \"network-metrics-daemon-vtscw\" (UID: \"2b36dd73-c75d-446e-85fe-d11afdd5a816\") " pod="openshift-multus/network-metrics-daemon-vtscw"
Oct 14 14:49:34 crc kubenswrapper[4860]: E1014 14:49:34.050306 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 14:49:34 crc kubenswrapper[4860]: E1014 14:49:34.050412 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs podName:2b36dd73-c75d-446e-85fe-d11afdd5a816 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:35.05038525 +0000 UTC m=+36.637168759 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs") pod "network-metrics-daemon-vtscw" (UID: "2b36dd73-c75d-446e-85fe-d11afdd5a816") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.083932 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.084055 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.084070 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.084089 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.084134 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:34Z","lastTransitionTime":"2025-10-14T14:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.186954 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.187019 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.187085 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.187104 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.187116 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:34Z","lastTransitionTime":"2025-10-14T14:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.290015 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.290084 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.290097 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.290122 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.290137 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:34Z","lastTransitionTime":"2025-10-14T14:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.298164 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.316782 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.333450 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.348529 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.374961 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.391763 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" event={"ID":"bd2cd739-fe15-4cc1-881e-a20faa721bb3","Type":"ContainerStarted","Data":"4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3"} Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.392181 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.392235 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.392254 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.392280 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.392302 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:34Z","lastTransitionTime":"2025-10-14T14:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.401245 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T1
4:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4d
bc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.439072 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.465835 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.482613 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.494866 4860 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.494906 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.494915 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.494931 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.494945 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:34Z","lastTransitionTime":"2025-10-14T14:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.500384 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.509834 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.518592 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.528935 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.538436 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.547907 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.558003 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.569444 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.577849 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.587469 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.595879 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.597491 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.597585 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.597602 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.597623 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.597635 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:34Z","lastTransitionTime":"2025-10-14T14:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.606290 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.617586 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.627568 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.637914 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.650055 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.662554 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea1
2f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.675274 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.688320 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.699705 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.699745 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.699756 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.699772 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.699784 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:34Z","lastTransitionTime":"2025-10-14T14:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.700946 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.711874 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.774837 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.795281 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255
d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.801637 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.801684 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.801694 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.801711 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.801721 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:34Z","lastTransitionTime":"2025-10-14T14:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.806588 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.818122 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.829331 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:34Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.856704 4860 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.856791 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.856826 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.856861 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:34 crc kubenswrapper[4860]: E1014 14:49:34.856889 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:49:50.856865298 +0000 UTC m=+52.443648747 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.856916 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:34 crc kubenswrapper[4860]: E1014 14:49:34.856944 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 14:49:34 crc kubenswrapper[4860]: E1014 14:49:34.856947 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 14:49:34 crc kubenswrapper[4860]: E1014 14:49:34.856972 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 14:49:34 crc kubenswrapper[4860]: E1014 14:49:34.856984 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:50.85697441 +0000 UTC m=+52.443757859 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 14:49:34 crc kubenswrapper[4860]: E1014 14:49:34.856986 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:34 crc kubenswrapper[4860]: E1014 14:49:34.857016 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 14:49:34 crc kubenswrapper[4860]: E1014 14:49:34.857052 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:50.857018192 +0000 UTC m=+52.443801641 (durationBeforeRetry 16s). 
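The dominant failure in this section is the webhook call rejected with "x509: certificate has expired or is not yet valid: current time ... is after ...". That message comes from Go's crypto/x509 validity-window check: the leaf certificate's NotAfter (2025-08-24T17:21:41Z) predates the current time. A minimal sketch of that check under the assumption of a placeholder PEM file path; this reproduces the comparison, not the full chain verification:

```go
// Sketch of the certificate validity-window check behind the recurring
// "certificate has expired or is not yet valid" webhook failures above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("webhook-cert.pem") // placeholder path, an assumption
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// Matches the shape of the x509 error quoted in the log.
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```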
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:34 crc kubenswrapper[4860]: E1014 14:49:34.856982 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 14:49:34 crc kubenswrapper[4860]: E1014 14:49:34.857108 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:50.857094003 +0000 UTC m=+52.443877452 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 14:49:34 crc kubenswrapper[4860]: E1014 14:49:34.857055 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 14:49:34 crc kubenswrapper[4860]: E1014 14:49:34.857127 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:34 crc kubenswrapper[4860]: E1014 14:49:34.857147 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:50.857141854 +0000 UTC m=+52.443925303 (durationBeforeRetry 16s). 
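Note the shape of the projected-volume failures above: one MountVolume.SetUp error reports every missing source at once as a bracketed, comma-separated list ("[object ... not registered, object ... not registered]"). Kubernetes builds these with its own apimachinery error-aggregation helpers; the stdlib-only sketch below merely reproduces that formatting as an illustration:

```go
// Sketch of the error aggregation visible in the MountVolume.SetUp
// failures above, where a projected volume with several missing sources
// surfaces them all in one bracketed message.
package main

import (
	"fmt"
	"strings"
)

type aggregate []error

// Error renders all collected failures as "[err1, err2, ...]",
// matching the shape seen in the kubelet entries above.
func (a aggregate) Error() string {
	msgs := make([]string, len(a))
	for i, err := range a {
		msgs[i] = err.Error()
	}
	return "[" + strings.Join(msgs, ", ") + "]"
}

func main() {
	errs := aggregate{
		fmt.Errorf("object %q/%q not registered", "openshift-network-diagnostics", "kube-root-ca.crt"),
		fmt.Errorf("object %q/%q not registered", "openshift-network-diagnostics", "openshift-service-ca.crt"),
	}
	fmt.Println("Error preparing data for projected volume:", errs.Error())
}
```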
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.903973 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.904042 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.904057 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.904077 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:34 crc kubenswrapper[4860]: I1014 14:49:34.904087 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:34Z","lastTransitionTime":"2025-10-14T14:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.006482 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.006522 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.006557 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.006579 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.006589 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:35Z","lastTransitionTime":"2025-10-14T14:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.011894 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.011939 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.011948 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.011965 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.011975 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:35Z","lastTransitionTime":"2025-10-14T14:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:35 crc kubenswrapper[4860]: E1014 14:49:35.028513 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 
2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.033549 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.033595 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.033605 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.033622 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.033632 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:35Z","lastTransitionTime":"2025-10-14T14:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:35 crc kubenswrapper[4860]: E1014 14:49:35.046567 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 
2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.050014 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.050060 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.050072 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.050087 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.050096 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:35Z","lastTransitionTime":"2025-10-14T14:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.059382 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs\") pod \"network-metrics-daemon-vtscw\" (UID: \"2b36dd73-c75d-446e-85fe-d11afdd5a816\") " pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:35 crc kubenswrapper[4860]: E1014 14:49:35.059565 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 14:49:35 crc kubenswrapper[4860]: E1014 14:49:35.059679 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs podName:2b36dd73-c75d-446e-85fe-d11afdd5a816 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:37.059650321 +0000 UTC m=+38.646433830 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs") pod "network-metrics-daemon-vtscw" (UID: "2b36dd73-c75d-446e-85fe-d11afdd5a816") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.061138 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.061159 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.061309 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:35 crc kubenswrapper[4860]: E1014 14:49:35.061303 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.061350 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:35 crc kubenswrapper[4860]: E1014 14:49:35.061414 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:35 crc kubenswrapper[4860]: E1014 14:49:35.061555 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:35 crc kubenswrapper[4860]: E1014 14:49:35.061718 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:35 crc kubenswrapper[4860]: E1014 14:49:35.067259 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 
2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.070498 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.070535 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.070545 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.070563 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.070573 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:35Z","lastTransitionTime":"2025-10-14T14:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:35 crc kubenswrapper[4860]: E1014 14:49:35.081256 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 
2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.085806 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.085927 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.085937 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.085953 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.085963 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:35Z","lastTransitionTime":"2025-10-14T14:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:35 crc kubenswrapper[4860]: E1014 14:49:35.100067 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 
2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: E1014 14:49:35.100295 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.109306 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.109354 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.109366 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.109383 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.109400 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:35Z","lastTransitionTime":"2025-10-14T14:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.211687 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.211738 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.211749 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.211765 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.211776 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:35Z","lastTransitionTime":"2025-10-14T14:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.314071 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.314145 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.314163 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.314192 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.314210 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:35Z","lastTransitionTime":"2025-10-14T14:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.399206 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovnkube-controller/0.log" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.401788 4860 generic.go:334] "Generic (PLEG): container finished" podID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerID="df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2" exitCode=1 Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.402148 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerDied","Data":"df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2"} Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.403055 4860 scope.go:117] "RemoveContainer" containerID="df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.417608 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.417700 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.417722 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.417757 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.417808 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:35Z","lastTransitionTime":"2025-10-14T14:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.422235 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.438181 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.451707 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.476785 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.504563 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255
d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.520999 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d6656
35a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.522061 4860 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.522109 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.522122 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.522144 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.522529 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:35Z","lastTransitionTime":"2025-10-14T14:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.537900 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.550477 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:
49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.564169 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.575827 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.587911 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.608279 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\
\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.622336 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.624692 4860 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.624722 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.624733 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.624750 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.624762 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:35Z","lastTransitionTime":"2025-10-14T14:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.641125 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.653954 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.667201 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.681955 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.695737 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.708046 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.722292 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea1
2f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.727224 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.727267 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.727275 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.727292 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.727302 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:35Z","lastTransitionTime":"2025-10-14T14:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.734069 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.746406 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.760151 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.773269 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.785524 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.813589 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"14:49:34.435089 5989 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 14:49:34.447395 5989 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 14:49:34.447436 5989 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 14:49:34.447465 5989 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1014 14:49:34.447471 5989 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1014 14:49:34.447502 5989 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 14:49:34.447512 5989 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 14:49:34.447521 5989 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 14:49:34.447529 5989 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 14:49:34.450654 5989 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 14:49:34.450695 5989 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 14:49:34.450714 5989 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 14:49:34.450742 5989 factory.go:656] Stopping watch factory\\\\nI1014 14:49:34.450790 5989 ovnkube.go:599] Stopped ovnkube\\\\nI1014 14:49:34.450826 5989 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 14:49:34.450841 5989 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.829329 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.829366 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.829375 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.829389 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.829399 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:35Z","lastTransitionTime":"2025-10-14T14:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.835046 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.848752 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.859698 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.878183 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.891426 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.900591 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.910679 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 
14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.922957 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:35Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.931473 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.931508 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.931520 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.931563 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:35 crc kubenswrapper[4860]: I1014 14:49:35.931576 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:35Z","lastTransitionTime":"2025-10-14T14:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.040145 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.040183 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.040193 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.040210 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.040222 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:36Z","lastTransitionTime":"2025-10-14T14:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.143446 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.143524 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.143536 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.143557 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.143786 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:36Z","lastTransitionTime":"2025-10-14T14:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.246813 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.246876 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.246893 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.246920 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.246938 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:36Z","lastTransitionTime":"2025-10-14T14:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.349735 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.349778 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.349799 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.349819 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.349830 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:36Z","lastTransitionTime":"2025-10-14T14:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.409707 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovnkube-controller/0.log" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.412599 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerStarted","Data":"32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f"} Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.413063 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.436925 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255
d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.452706 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.452749 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.452759 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.452796 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.452809 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:36Z","lastTransitionTime":"2025-10-14T14:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.454636 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.478606 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.491253 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.504829 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.518998 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.530228 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z" Oct 14 
14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.543325 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z"
Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.553900 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.554960 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.554999 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.555010 4860 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.555043 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.555055 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:36Z","lastTransitionTime":"2025-10-14T14:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.565465 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z"
Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.579318 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.592608 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.604333 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z"
Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.621973 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"14:49:34.435089 5989 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 14:49:34.447395 5989 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 14:49:34.447436 5989 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 14:49:34.447465 5989 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1014 14:49:34.447471 5989 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1014 14:49:34.447502 5989 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 14:49:34.447512 5989 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 14:49:34.447521 5989 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 14:49:34.447529 5989 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 14:49:34.450654 5989 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 14:49:34.450695 5989 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 14:49:34.450714 5989 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 14:49:34.450742 5989 factory.go:656] Stopping watch factory\\\\nI1014 14:49:34.450790 5989 ovnkube.go:599] Stopped ovnkube\\\\nI1014 14:49:34.450826 5989 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 14:49:34.450841 5989 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z"
Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.636208 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.649082 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.657927 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.657966 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.657976 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.657993 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.658003 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:36Z","lastTransitionTime":"2025-10-14T14:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.662199 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:36Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.760569 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.760601 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.760610 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.760626 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.760635 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:36Z","lastTransitionTime":"2025-10-14T14:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.863107 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.863150 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.863159 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.863176 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.863187 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:36Z","lastTransitionTime":"2025-10-14T14:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.965499 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.965582 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.965597 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.965617 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:36 crc kubenswrapper[4860]: I1014 14:49:36.965633 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:36Z","lastTransitionTime":"2025-10-14T14:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.061477 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.061486 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.061540 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:37 crc kubenswrapper[4860]: E1014 14:49:37.061689 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.061719 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:37 crc kubenswrapper[4860]: E1014 14:49:37.061803 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:37 crc kubenswrapper[4860]: E1014 14:49:37.061848 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:37 crc kubenswrapper[4860]: E1014 14:49:37.061904 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.067563 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.067637 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.067656 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.067734 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.067774 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:37Z","lastTransitionTime":"2025-10-14T14:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.078752 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs\") pod \"network-metrics-daemon-vtscw\" (UID: \"2b36dd73-c75d-446e-85fe-d11afdd5a816\") " pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:37 crc kubenswrapper[4860]: E1014 14:49:37.079068 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 14:49:37 crc kubenswrapper[4860]: E1014 14:49:37.079180 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs podName:2b36dd73-c75d-446e-85fe-d11afdd5a816 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:41.079146696 +0000 UTC m=+42.665930185 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs") pod "network-metrics-daemon-vtscw" (UID: "2b36dd73-c75d-446e-85fe-d11afdd5a816") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.169564 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.169606 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.169616 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.169632 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.169643 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:37Z","lastTransitionTime":"2025-10-14T14:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.272016 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.272135 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.272153 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.272180 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.272202 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:37Z","lastTransitionTime":"2025-10-14T14:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.375407 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.375499 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.375515 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.375540 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.375557 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:37Z","lastTransitionTime":"2025-10-14T14:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.418945 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovnkube-controller/1.log" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.419574 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovnkube-controller/0.log" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.422813 4860 generic.go:334] "Generic (PLEG): container finished" podID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerID="32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f" exitCode=1 Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.422851 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerDied","Data":"32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f"} Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.422913 4860 scope.go:117] "RemoveContainer" containerID="df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.424140 4860 scope.go:117] "RemoveContainer" containerID="32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f" Oct 14 14:49:37 crc kubenswrapper[4860]: E1014 14:49:37.424500 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.442602 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.453778 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.465555 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.479125 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.479174 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.479189 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.479211 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.479229 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:37Z","lastTransitionTime":"2025-10-14T14:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.492739 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.510715 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.522354 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.537272 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.556881 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea1
2f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.569383 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.582077 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.582109 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.582126 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.582160 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.582077 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.582185 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:37Z","lastTransitionTime":"2025-10-14T14:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.593862 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.603553 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.620585 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df995e887249f8b6eb67280a463de7d15c7b9da9c13d706f09aab45fbaa4d5e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"message\\\":\\\"14:49:34.435089 5989 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 14:49:34.447395 5989 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 14:49:34.447436 5989 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 14:49:34.447465 5989 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1014 14:49:34.447471 5989 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1014 14:49:34.447502 5989 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 14:49:34.447512 5989 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 14:49:34.447521 5989 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 14:49:34.447529 5989 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 14:49:34.450654 5989 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 14:49:34.450695 5989 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 14:49:34.450714 5989 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 14:49:34.450742 5989 factory.go:656] Stopping watch factory\\\\nI1014 14:49:34.450790 5989 ovnkube.go:599] Stopped ovnkube\\\\nI1014 14:49:34.450826 5989 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 14:49:34.450841 5989 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:37Z\\\",\\\"message\\\":\\\"nshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 14:49:36.876549 6262 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 
neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1014 14:49:36.876828 6262 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\
\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.640674 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255
d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.652849 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d6656
35a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.665055 4860 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.674712 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:37Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.684548 4860 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.684591 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.684608 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.684629 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.684644 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:37Z","lastTransitionTime":"2025-10-14T14:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.786591 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.786652 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.786665 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.786685 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.786697 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:37Z","lastTransitionTime":"2025-10-14T14:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.888813 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.888869 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.888880 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.888902 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.888915 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:37Z","lastTransitionTime":"2025-10-14T14:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.992219 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.992286 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.992303 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.992329 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:37 crc kubenswrapper[4860]: I1014 14:49:37.992347 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:37Z","lastTransitionTime":"2025-10-14T14:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.095568 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.095626 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.095644 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.095670 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.095688 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:38Z","lastTransitionTime":"2025-10-14T14:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.198357 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.198419 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.198442 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.198474 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.198495 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:38Z","lastTransitionTime":"2025-10-14T14:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.301804 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.301883 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.301908 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.301942 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.301969 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:38Z","lastTransitionTime":"2025-10-14T14:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.404494 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.404553 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.404564 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.404584 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.404597 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:38Z","lastTransitionTime":"2025-10-14T14:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.429311 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovnkube-controller/1.log" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.433987 4860 scope.go:117] "RemoveContainer" containerID="32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f" Oct 14 14:49:38 crc kubenswrapper[4860]: E1014 14:49:38.434200 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.456660 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.473314 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.485444 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.504288 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:37Z\\\",\\\"message\\\":\\\"nshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 14:49:36.876549 6262 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1014 14:49:36.876828 6262 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.507383 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.507423 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.507434 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.507451 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.507461 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:38Z","lastTransitionTime":"2025-10-14T14:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.524397 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.542311 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.559610 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.584436 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.598458 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.610307 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.610370 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.610384 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.610405 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.610423 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:38Z","lastTransitionTime":"2025-10-14T14:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.614846 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.625370 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 
14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.639944 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.650718 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.661822 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.674304 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.687761 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.701775 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:38Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.712680 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.712858 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.712968 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.713104 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.713168 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:38Z","lastTransitionTime":"2025-10-14T14:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.815614 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.815652 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.815664 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.815693 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.815705 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:38Z","lastTransitionTime":"2025-10-14T14:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.919220 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.919263 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.919272 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.919289 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:38 crc kubenswrapper[4860]: I1014 14:49:38.919300 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:38Z","lastTransitionTime":"2025-10-14T14:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.021823 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.021867 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.021877 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.021894 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.021902 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:39Z","lastTransitionTime":"2025-10-14T14:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.061605 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.061647 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.061652 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.061610 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:39 crc kubenswrapper[4860]: E1014 14:49:39.061819 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:39 crc kubenswrapper[4860]: E1014 14:49:39.061776 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:39 crc kubenswrapper[4860]: E1014 14:49:39.061962 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:49:39 crc kubenswrapper[4860]: E1014 14:49:39.062068 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.076183 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.096655 4860 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684a
b507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.106966 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.117065 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.124156 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.124215 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.124233 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.124257 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.124278 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:39Z","lastTransitionTime":"2025-10-14T14:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.130589 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.143184 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.158801 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.168949 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.178391 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.193999 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:37Z\\\",\\\"message\\\":\\\"nshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 14:49:36.876549 6262 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1014 14:49:36.876828 6262 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.211768 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a
8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.223409 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.226620 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.226649 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.226657 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.226672 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.226681 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:39Z","lastTransitionTime":"2025-10-14T14:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.234080 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.243227 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.253189 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.263304 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.273757 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:39Z is after 2025-08-24T17:21:41Z" Oct 14 
14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.328842 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.328880 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.328889 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.328904 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.328913 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:39Z","lastTransitionTime":"2025-10-14T14:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.430886 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.430936 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.430964 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.430981 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.430990 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:39Z","lastTransitionTime":"2025-10-14T14:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.532576 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.532610 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.532617 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.532640 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.532650 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:39Z","lastTransitionTime":"2025-10-14T14:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
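Has your network provider started?"}

Every status patch above is being rejected for the same reason: the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743/pod is presenting a certificate that expired on 2025-08-24T17:21:41Z, long before the current time of 2025-10-14T14:49:39Z. A minimal Go sketch (not part of the log; the endpoint address is taken from the error messages above) that dials the webhook and prints the validity window of whatever certificate it serves:

```go
// Inspect the certificate served on the webhook port named in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Address taken from the kubelet errors above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip verification deliberately: the point is to read the
		// expired certificate, not to trust it.
		InsecureSkipVerify: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%v\n",
			cert.Subject, cert.NotBefore, cert.NotAfter,
			time.Now().After(cert.NotAfter))
	}
}
```

Until that certificate is rotated, every pod status patch that passes through the webhook will keep failing with the same x509 error, which is why the identical message recurs for unrelated pods.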
Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.634707 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.634765 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.634783 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.634806 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.634823 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:39Z","lastTransitionTime":"2025-10-14T14:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.737338 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.737387 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.737401 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.737421 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.737435 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:39Z","lastTransitionTime":"2025-10-14T14:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.839415 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.839462 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.839477 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.839497 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.839513 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:39Z","lastTransitionTime":"2025-10-14T14:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.942114 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.942468 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.942636 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.942819 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:39 crc kubenswrapper[4860]: I1014 14:49:39.942968 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:39Z","lastTransitionTime":"2025-10-14T14:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.045863 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.045902 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.045911 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.045925 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.045933 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:40Z","lastTransitionTime":"2025-10-14T14:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.148475 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.148823 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.149017 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.149255 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.149420 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:40Z","lastTransitionTime":"2025-10-14T14:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.251965 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.252040 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.252049 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.252066 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.252078 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:40Z","lastTransitionTime":"2025-10-14T14:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.354446 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.354491 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.354501 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.354519 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.354530 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:40Z","lastTransitionTime":"2025-10-14T14:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.456925 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.456968 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.456979 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.456995 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.457008 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:40Z","lastTransitionTime":"2025-10-14T14:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.559426 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.559463 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.559475 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.559490 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.559534 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:40Z","lastTransitionTime":"2025-10-14T14:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.661624 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.661685 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.661697 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.661718 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.661734 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:40Z","lastTransitionTime":"2025-10-14T14:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.763906 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.763951 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.763961 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.763977 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.763987 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:40Z","lastTransitionTime":"2025-10-14T14:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.866298 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.866361 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.866372 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.866398 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.866409 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:40Z","lastTransitionTime":"2025-10-14T14:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.968877 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.968926 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.968936 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.968952 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:40 crc kubenswrapper[4860]: I1014 14:49:40.968985 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:40Z","lastTransitionTime":"2025-10-14T14:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.061123 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.061213 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.061241 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.061306 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:41 crc kubenswrapper[4860]: E1014 14:49:41.061361 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:41 crc kubenswrapper[4860]: E1014 14:49:41.061520 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:41 crc kubenswrapper[4860]: E1014 14:49:41.061545 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:41 crc kubenswrapper[4860]: E1014 14:49:41.061724 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.070634 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.070865 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.070956 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.071064 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.071166 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:41Z","lastTransitionTime":"2025-10-14T14:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
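Has your network provider started?"}

The NetworkPluginNotReady loop above repeats because the kubelet finds no CNI network configuration on disk, so the node stays NotReady and sandbox-less pods keep being skipped with "Error syncing pod". A short diagnostic sketch in Go of the same check (the directory path is the one quoted in the message; this is an illustration, not kubelet code):

```go
// List whatever CNI configuration exists where the kubelet expects it.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory quoted in the NetworkPluginNotReady message above.
	dir := "/etc/kubernetes/cni/net.d"

	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", dir, err)
		os.Exit(1)
	}
	if len(entries) == 0 {
		fmt.Printf("%s is empty: the network plugin has not written its config yet\n", dir)
		return
	}
	for _, e := range entries {
		fmt.Println(filepath.Join(dir, e.Name()))
	}
}
```

Once the network plugin (here, ovn-kubernetes) writes a .conf or .conflist file into that directory, the Ready condition can flip and the four pods listed above get their sandboxes.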
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.116712 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs\") pod \"network-metrics-daemon-vtscw\" (UID: \"2b36dd73-c75d-446e-85fe-d11afdd5a816\") " pod="openshift-multus/network-metrics-daemon-vtscw"
Oct 14 14:49:41 crc kubenswrapper[4860]: E1014 14:49:41.116868 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 14:49:41 crc kubenswrapper[4860]: E1014 14:49:41.116925 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs podName:2b36dd73-c75d-446e-85fe-d11afdd5a816 nodeName:}" failed. No retries permitted until 2025-10-14 14:49:49.116907536 +0000 UTC m=+50.703690985 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs") pod "network-metrics-daemon-vtscw" (UID: "2b36dd73-c75d-446e-85fe-d11afdd5a816") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.173264 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.173317 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.173328 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.173346 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.173358 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:41Z","lastTransitionTime":"2025-10-14T14:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
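Has your network provider started?"}

The metrics-certs mount failure above is a different symptom from the CNI loop: "object \"openshift-multus\"/\"metrics-daemon-secret\" not registered" usually means the kubelet has not yet synced that secret into its local object cache, so the secret volume plugin cannot resolve it, and the retry is deferred for 8s (durationBeforeRetry). A hedged client-go sketch to confirm from outside the kubelet whether the secret exists on the API server at all (reading the kubeconfig path from $KUBECONFIG is an assumption for illustration):

```go
// Check whether the secret named in the mount error actually exists.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes $KUBECONFIG points at a working kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Namespace and name come from the MountVolume error above.
	sec, err := client.CoreV1().Secrets("openshift-multus").Get(
		context.TODO(), "metrics-daemon-secret", metav1.GetOptions{})
	if err != nil {
		// A NotFound here would mean the secret was never created; any
		// other error points at API access rather than the kubelet cache.
		log.Fatalf("secret lookup failed: %v", err)
	}
	for k, v := range sec.Data {
		fmt.Printf("%s: %d bytes\n", k, len(v))
	}
}
```

If the secret is present, the mount typically succeeds on a later retry once the kubelet's watch on the object catches up.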
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.276214 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.276261 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.276273 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.276291 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.276303 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:41Z","lastTransitionTime":"2025-10-14T14:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.378941 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.378976 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.378984 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.378999 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.379008 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:41Z","lastTransitionTime":"2025-10-14T14:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.484916 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.485378 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.485389 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.485405 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.485415 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:41Z","lastTransitionTime":"2025-10-14T14:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.587379 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.587440 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.587451 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.587469 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.587481 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:41Z","lastTransitionTime":"2025-10-14T14:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.689251 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.689315 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.689333 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.689357 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.689376 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:41Z","lastTransitionTime":"2025-10-14T14:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.791600 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.791644 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.791654 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.791669 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.791682 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:41Z","lastTransitionTime":"2025-10-14T14:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.893446 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.893498 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.893507 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.893526 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.893538 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:41Z","lastTransitionTime":"2025-10-14T14:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.995972 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.996304 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.996317 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.996334 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:41 crc kubenswrapper[4860]: I1014 14:49:41.996345 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:41Z","lastTransitionTime":"2025-10-14T14:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.099374 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.099450 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.099459 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.099473 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.099485 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:42Z","lastTransitionTime":"2025-10-14T14:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.201988 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.202049 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.202058 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.202076 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.202087 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:42Z","lastTransitionTime":"2025-10-14T14:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.304039 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.304088 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.304099 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.304117 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.304130 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:42Z","lastTransitionTime":"2025-10-14T14:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.406596 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.406651 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.406663 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.406683 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.406695 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:42Z","lastTransitionTime":"2025-10-14T14:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.509070 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.509121 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.509134 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.509155 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.509171 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:42Z","lastTransitionTime":"2025-10-14T14:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.612205 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.612275 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.612290 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.612314 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.612331 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:42Z","lastTransitionTime":"2025-10-14T14:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.714585 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.714648 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.714668 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.714695 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.714716 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:42Z","lastTransitionTime":"2025-10-14T14:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.818786 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.818868 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.818894 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.818925 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.818952 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:42Z","lastTransitionTime":"2025-10-14T14:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.922230 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.922293 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.922311 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.922335 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:42 crc kubenswrapper[4860]: I1014 14:49:42.922352 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:42Z","lastTransitionTime":"2025-10-14T14:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.025949 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.025995 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.026006 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.026039 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.026051 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:43Z","lastTransitionTime":"2025-10-14T14:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.060998 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.061070 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.061113 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:43 crc kubenswrapper[4860]: E1014 14:49:43.061150 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.061196 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:43 crc kubenswrapper[4860]: E1014 14:49:43.061325 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:43 crc kubenswrapper[4860]: E1014 14:49:43.061545 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:49:43 crc kubenswrapper[4860]: E1014 14:49:43.061613 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.128617 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.128663 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.128672 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.128692 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.128701 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:43Z","lastTransitionTime":"2025-10-14T14:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.233221 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.233273 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.233288 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.233310 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.233325 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:43Z","lastTransitionTime":"2025-10-14T14:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.335663 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.335800 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.335813 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.335832 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.335846 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:43Z","lastTransitionTime":"2025-10-14T14:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.440934 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.440995 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.441013 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.441057 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.441075 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:43Z","lastTransitionTime":"2025-10-14T14:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.543791 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.543855 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.543868 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.543888 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.543901 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:43Z","lastTransitionTime":"2025-10-14T14:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.645837 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.645889 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.645898 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.645913 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.645925 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:43Z","lastTransitionTime":"2025-10-14T14:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.748489 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.748532 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.748541 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.748558 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.748567 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:43Z","lastTransitionTime":"2025-10-14T14:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.850850 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.851259 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.851292 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.851338 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.851360 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:43Z","lastTransitionTime":"2025-10-14T14:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.954338 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.954377 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.954386 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.954400 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:43 crc kubenswrapper[4860]: I1014 14:49:43.954411 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:43Z","lastTransitionTime":"2025-10-14T14:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.056799 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.056842 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.056851 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.056869 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.056880 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:44Z","lastTransitionTime":"2025-10-14T14:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.159732 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.159789 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.159805 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.159829 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.159880 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:44Z","lastTransitionTime":"2025-10-14T14:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.262872 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.262903 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.262912 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.262925 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.262934 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:44Z","lastTransitionTime":"2025-10-14T14:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.365278 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.365326 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.365338 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.365357 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.365371 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:44Z","lastTransitionTime":"2025-10-14T14:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.467100 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.467156 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.467172 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.467188 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.467199 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:44Z","lastTransitionTime":"2025-10-14T14:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.570002 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.570078 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.570090 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.570109 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.570124 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:44Z","lastTransitionTime":"2025-10-14T14:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.672335 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.672386 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.672397 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.672415 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.672427 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:44Z","lastTransitionTime":"2025-10-14T14:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.775853 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.775905 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.775928 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.775963 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.775981 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:44Z","lastTransitionTime":"2025-10-14T14:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.878184 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.878253 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.878267 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.878287 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.878299 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:44Z","lastTransitionTime":"2025-10-14T14:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.981093 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.981143 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.981157 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.981175 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:44 crc kubenswrapper[4860]: I1014 14:49:44.981189 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:44Z","lastTransitionTime":"2025-10-14T14:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.061370 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.061438 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:45 crc kubenswrapper[4860]: E1014 14:49:45.061508 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:49:45 crc kubenswrapper[4860]: E1014 14:49:45.061575 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.061655 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:45 crc kubenswrapper[4860]: E1014 14:49:45.061712 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.061840 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:45 crc kubenswrapper[4860]: E1014 14:49:45.061919 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.083767 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.083811 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.083825 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.083843 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.083859 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:45Z","lastTransitionTime":"2025-10-14T14:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.187086 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.187130 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.187139 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.187154 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.187163 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:45Z","lastTransitionTime":"2025-10-14T14:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.289663 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.289717 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.289732 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.289748 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.289760 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:45Z","lastTransitionTime":"2025-10-14T14:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.361474 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.361508 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.361518 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.361533 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.361542 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:45Z","lastTransitionTime":"2025-10-14T14:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:45 crc kubenswrapper[4860]: E1014 14:49:45.373554 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:45Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.377421 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.377456 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.377467 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.377485 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.377501 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:45Z","lastTransitionTime":"2025-10-14T14:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:45 crc kubenswrapper[4860]: E1014 14:49:45.387701 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:45Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.391066 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.391107 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.391116 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.391151 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.391166 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:45Z","lastTransitionTime":"2025-10-14T14:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:45 crc kubenswrapper[4860]: E1014 14:49:45.403007 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:45Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.406567 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.406624 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.406638 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.406675 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.406688 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:45Z","lastTransitionTime":"2025-10-14T14:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:45 crc kubenswrapper[4860]: E1014 14:49:45.418079 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{[status patch payload identical to the previous attempt omitted]}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:45Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.422588 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.422645 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.422656 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.422670 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.422680 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:45Z","lastTransitionTime":"2025-10-14T14:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:45 crc kubenswrapper[4860]: E1014 14:49:45.436083 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{[status patch payload identical to the previous attempts omitted]}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:45Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:45 crc kubenswrapper[4860]: E1014 14:49:45.436255 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.438899 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.438946 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.438958 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.438980 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.438993 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:45Z","lastTransitionTime":"2025-10-14T14:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.542072 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.542121 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.542131 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.542147 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.542158 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:45Z","lastTransitionTime":"2025-10-14T14:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.645417 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.645466 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.645478 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.645495 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.645507 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:45Z","lastTransitionTime":"2025-10-14T14:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.747396 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.747444 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.747454 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.747467 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.747477 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:45Z","lastTransitionTime":"2025-10-14T14:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.849557 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.849601 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.849614 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.849628 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.849639 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:45Z","lastTransitionTime":"2025-10-14T14:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.952340 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.952375 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.952386 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.952402 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:45 crc kubenswrapper[4860]: I1014 14:49:45.952413 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:45Z","lastTransitionTime":"2025-10-14T14:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.054016 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.054074 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.054083 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.054095 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.054103 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:46Z","lastTransitionTime":"2025-10-14T14:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.155783 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.155822 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.155832 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.155848 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.155858 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:46Z","lastTransitionTime":"2025-10-14T14:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.258347 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.258384 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.258395 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.258408 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.258417 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:46Z","lastTransitionTime":"2025-10-14T14:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.360532 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.360581 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.360590 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.360605 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.360623 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:46Z","lastTransitionTime":"2025-10-14T14:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.462324 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.462362 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.462374 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.462388 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.462397 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:46Z","lastTransitionTime":"2025-10-14T14:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.564365 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.564422 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.564438 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.564462 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.564478 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:46Z","lastTransitionTime":"2025-10-14T14:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.667618 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.667662 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.667675 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.667693 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.667706 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:46Z","lastTransitionTime":"2025-10-14T14:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.770311 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.770351 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.770361 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.770377 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.770387 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:46Z","lastTransitionTime":"2025-10-14T14:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.872248 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.872345 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.872359 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.872377 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.872389 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:46Z","lastTransitionTime":"2025-10-14T14:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.975528 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.975596 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.975609 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.975626 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:46 crc kubenswrapper[4860]: I1014 14:49:46.975637 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:46Z","lastTransitionTime":"2025-10-14T14:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.061348 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.061464 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.061391 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:47 crc kubenswrapper[4860]: E1014 14:49:47.061524 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.061546 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:47 crc kubenswrapper[4860]: E1014 14:49:47.061688 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:47 crc kubenswrapper[4860]: E1014 14:49:47.061814 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:49:47 crc kubenswrapper[4860]: E1014 14:49:47.061945 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.077141 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.077403 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.077470 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.077528 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.077581 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:47Z","lastTransitionTime":"2025-10-14T14:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.179788 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.179892 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.179918 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.179947 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.179967 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:47Z","lastTransitionTime":"2025-10-14T14:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.282550 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.282608 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.282631 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.282658 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.282679 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:47Z","lastTransitionTime":"2025-10-14T14:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.384329 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.384728 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.384920 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.385153 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.385370 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:47Z","lastTransitionTime":"2025-10-14T14:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.487872 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.488175 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.488336 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.488475 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.488609 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:47Z","lastTransitionTime":"2025-10-14T14:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.591500 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.591532 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.591539 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.591551 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.591560 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:47Z","lastTransitionTime":"2025-10-14T14:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.693822 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.693858 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.693867 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.693880 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.693889 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:47Z","lastTransitionTime":"2025-10-14T14:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.795362 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.795389 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.795440 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.795459 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.795469 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:47Z","lastTransitionTime":"2025-10-14T14:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.897358 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.897424 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.897435 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.897448 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.897456 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:47Z","lastTransitionTime":"2025-10-14T14:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.999849 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.999915 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.999929 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:47 crc kubenswrapper[4860]: I1014 14:49:47.999949 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:47.999964 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:47Z","lastTransitionTime":"2025-10-14T14:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.101892 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.101972 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.101982 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.102020 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.102058 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:48Z","lastTransitionTime":"2025-10-14T14:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.204697 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.204756 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.204774 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.204797 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.204816 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:48Z","lastTransitionTime":"2025-10-14T14:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.307394 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.307455 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.307481 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.307504 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.307522 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:48Z","lastTransitionTime":"2025-10-14T14:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.409954 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.409985 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.409998 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.410015 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.410045 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:48Z","lastTransitionTime":"2025-10-14T14:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.512809 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.512856 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.512867 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.512886 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.512898 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:48Z","lastTransitionTime":"2025-10-14T14:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.614954 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.614995 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.615005 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.615018 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.615054 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:48Z","lastTransitionTime":"2025-10-14T14:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.717979 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.718040 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.718051 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.718067 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.718078 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:48Z","lastTransitionTime":"2025-10-14T14:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.820443 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.820486 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.820499 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.820513 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.820521 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:48Z","lastTransitionTime":"2025-10-14T14:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.922496 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.922531 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.922538 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.922550 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:48 crc kubenswrapper[4860]: I1014 14:49:48.922558 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:48Z","lastTransitionTime":"2025-10-14T14:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.025399 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.025445 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.025458 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.025484 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.025546 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:49Z","lastTransitionTime":"2025-10-14T14:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.060969 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.061281 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.061145 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.061101 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:49 crc kubenswrapper[4860]: E1014 14:49:49.061517 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:49 crc kubenswrapper[4860]: E1014 14:49:49.061583 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:49 crc kubenswrapper[4860]: E1014 14:49:49.061707 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:49:49 crc kubenswrapper[4860]: E1014 14:49:49.061862 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.082607 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d8
7832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.098185 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
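The patch bodies in these status_manager.go:875 entries are ordinary JSON documents that arrive multiply escaped, because the journal renders the kubelet's quoted err= string. To read one, strip one level of Go quoting and re-indent. A short sketch under that assumption — the embedded payload is a shortened stand-in for the full etcd-crc patch above, not the complete document:

```go
// patchdump.go - sketch: recover a readable status patch from a log line.
// Assumption: the err="failed to patch status \"{...}\"" payload is a
// Go-quoted JSON string as it appears in the journal, so strconv.Unquote
// followed by json.Indent yields the original strategic-merge patch.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Shortened stand-in for the payload between the escaped quotes above;
	// the real line carries the full conditions/containerStatuses patch.
	quoted := `"{\"metadata\":{\"uid\":\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\"},\"status\":{\"conditions\":[{\"type\":\"Ready\",\"status\":\"True\"}]}}"`

	unquoted, err := strconv.Unquote(quoted)
	if err != nil {
		panic(err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(unquoted), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(pretty.String())
}
```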
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.115920 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.128600 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.128642 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.128657 4860 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.128674 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.128687 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:49Z","lastTransitionTime":"2025-10-14T14:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.132139 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.143766 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.156307 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.171128 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.184839 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\
\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.193481 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs\") pod \"network-metrics-daemon-vtscw\" (UID: \"2b36dd73-c75d-446e-85fe-d11afdd5a816\") " pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:49 crc kubenswrapper[4860]: E1014 14:49:49.193613 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 14:49:49 crc kubenswrapper[4860]: E1014 14:49:49.193698 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs podName:2b36dd73-c75d-446e-85fe-d11afdd5a816 nodeName:}" failed. No retries permitted until 2025-10-14 14:50:05.193675207 +0000 UTC m=+66.780458676 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs") pod "network-metrics-daemon-vtscw" (UID: "2b36dd73-c75d-446e-85fe-d11afdd5a816") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.195280 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.206090 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.217638 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.231126 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.231165 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.231176 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.231192 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.231202 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:49Z","lastTransitionTime":"2025-10-14T14:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.232114 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.246827 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.269662 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8063
5492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:37Z\\\",\\\"message\\\":\\\"nshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 14:49:36.876549 6262 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1014 14:49:36.876828 6262 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.282378 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.293932 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.306675 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.333856 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.334155 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.334368 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.334548 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.334670 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:49Z","lastTransitionTime":"2025-10-14T14:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.436939 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.436975 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.436986 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.437003 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.437015 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:49Z","lastTransitionTime":"2025-10-14T14:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.539231 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.539311 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.539323 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.539347 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.539360 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:49Z","lastTransitionTime":"2025-10-14T14:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.632382 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.641215 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.641258 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.641270 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.641287 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.641298 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:49Z","lastTransitionTime":"2025-10-14T14:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.645693 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6
de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.646780 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.656493 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.668276 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.681345 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.697137 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.710974 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.722998 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.735354 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.743552 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.743604 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.743617 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.743633 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.743969 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:49Z","lastTransitionTime":"2025-10-14T14:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.744404 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.760209 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:37Z\\\",\\\"message\\\":\\\"nshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 14:49:36.876549 6262 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1014 14:49:36.876828 6262 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.777966 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a
8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.790907 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.801334 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.811623 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.824617 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.834704 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.845678 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.845710 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.845718 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.845731 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.845740 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:49Z","lastTransitionTime":"2025-10-14T14:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.846922 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:49Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.947830 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.947859 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.947866 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.947879 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:49 crc kubenswrapper[4860]: I1014 14:49:49.947888 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:49Z","lastTransitionTime":"2025-10-14T14:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.050427 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.050475 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.050485 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.050497 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.050506 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:50Z","lastTransitionTime":"2025-10-14T14:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.153421 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.153484 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.153495 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.153511 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.153523 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:50Z","lastTransitionTime":"2025-10-14T14:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.256280 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.256345 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.256356 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.256397 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.256411 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:50Z","lastTransitionTime":"2025-10-14T14:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.358332 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.358370 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.358397 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.358411 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.358420 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:50Z","lastTransitionTime":"2025-10-14T14:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.461297 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.461342 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.461351 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.461366 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.461377 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:50Z","lastTransitionTime":"2025-10-14T14:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.563701 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.563735 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.563744 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.563779 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.563791 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:50Z","lastTransitionTime":"2025-10-14T14:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.665895 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.665949 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.665963 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.665977 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.665986 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:50Z","lastTransitionTime":"2025-10-14T14:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.768936 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.768985 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.768998 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.769018 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.769058 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:50Z","lastTransitionTime":"2025-10-14T14:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.871959 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.872080 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.872099 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.872123 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.872140 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:50Z","lastTransitionTime":"2025-10-14T14:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.916120 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.916268 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.916331 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.916385 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.916434 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:50 crc kubenswrapper[4860]: E1014 14:49:50.916575 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 14:49:50 crc kubenswrapper[4860]: E1014 14:49:50.916656 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 14:50:22.916633163 +0000 UTC m=+84.503416652 (durationBeforeRetry 32s). 
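The `nestedpendingoperations` entries above show the volume manager pushing each retry out by a growing interval — here "No retries permitted until ... (durationBeforeRetry 32s)". A rough sketch of that pacing, doubling the delay per failure up to a cap, is below; the initial delay and cap are illustrative assumptions, not kubelet's exact constants.

```go
// Loose sketch of exponential retry pacing: each failure doubles the wait
// before the next attempt, capped at a maximum. With these assumed values,
// the seventh consecutive failure yields the 32s seen in the log.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay, max time.Duration
}

func (b *backoff) next() time.Duration {
	d := b.delay
	b.delay *= 2
	if b.delay > b.max {
		b.delay = b.max
	}
	return d
}

func main() {
	b := &backoff{delay: 500 * time.Millisecond, max: 2 * time.Minute}
	for i := 1; i <= 8; i++ {
		fmt.Printf("failure %d -> retry in %s\n", i, b.next())
	}
}
```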
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 14:49:50 crc kubenswrapper[4860]: E1014 14:49:50.916741 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 14:49:50 crc kubenswrapper[4860]: E1014 14:49:50.916785 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 14:49:50 crc kubenswrapper[4860]: E1014 14:49:50.916784 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 14:49:50 crc kubenswrapper[4860]: E1014 14:49:50.916806 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:50 crc kubenswrapper[4860]: E1014 14:49:50.916848 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:50:22.916812507 +0000 UTC m=+84.503595956 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:49:50 crc kubenswrapper[4860]: E1014 14:49:50.916871 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 14:50:22.916864308 +0000 UTC m=+84.503647757 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 14:49:50 crc kubenswrapper[4860]: E1014 14:49:50.916890 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 14:50:22.916881029 +0000 UTC m=+84.503664468 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:50 crc kubenswrapper[4860]: E1014 14:49:50.917385 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 14:49:50 crc kubenswrapper[4860]: E1014 14:49:50.917479 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 14:49:50 crc kubenswrapper[4860]: E1014 14:49:50.917563 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:50 crc kubenswrapper[4860]: E1014 14:49:50.917683 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 14:50:22.917669178 +0000 UTC m=+84.504452637 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.974925 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.975016 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.975085 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.975110 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:50 crc kubenswrapper[4860]: I1014 14:49:50.975160 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:50Z","lastTransitionTime":"2025-10-14T14:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.061421 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.061505 4860 util.go:30] "No sandbox for pod can be found. 
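The projected-volume failures say the referenced ConfigMaps are "not registered" — kubelet-side object-cache language, not proof the objects are absent from the API. One way to narrow it down is to ask the API server directly whether they exist. A hypothetical client-go check follows; the kubeconfig path is an assumption, and the namespace and names come from the log.

```go
// Hypothetical check: query the API server for the ConfigMaps that the
// projected volume kube-api-access-s2dwl references. A clean Get means the
// objects exist and the problem is kubelet's local registration, not the API.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		_, err := cs.CoreV1().ConfigMaps("openshift-network-diagnostics").
			Get(context.TODO(), name, metav1.GetOptions{})
		fmt.Printf("configmap %q: err=%v\n", name, err)
	}
}
```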
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.061443 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.061569 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:51 crc kubenswrapper[4860]: E1014 14:49:51.061597 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:51 crc kubenswrapper[4860]: E1014 14:49:51.061690 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:51 crc kubenswrapper[4860]: E1014 14:49:51.061772 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:51 crc kubenswrapper[4860]: E1014 14:49:51.061837 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.077857 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.077887 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.077895 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.077908 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.077917 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:51Z","lastTransitionTime":"2025-10-14T14:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.179721 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.179780 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.179798 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.179821 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.179839 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:51Z","lastTransitionTime":"2025-10-14T14:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.282083 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.282181 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.282196 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.282216 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.282231 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:51Z","lastTransitionTime":"2025-10-14T14:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.385624 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.385768 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.385788 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.385814 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.385876 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:51Z","lastTransitionTime":"2025-10-14T14:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.488642 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.488691 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.488702 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.488719 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.488731 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:51Z","lastTransitionTime":"2025-10-14T14:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.591589 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.591629 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.591640 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.591657 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.591667 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:51Z","lastTransitionTime":"2025-10-14T14:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.694548 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.694870 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.695074 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.695233 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.695411 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:51Z","lastTransitionTime":"2025-10-14T14:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.797934 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.797971 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.797982 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.797999 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.798010 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:51Z","lastTransitionTime":"2025-10-14T14:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.900106 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.900157 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.900169 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.900188 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:51 crc kubenswrapper[4860]: I1014 14:49:51.900200 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:51Z","lastTransitionTime":"2025-10-14T14:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.002308 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.002343 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.002350 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.002365 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.002374 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:52Z","lastTransitionTime":"2025-10-14T14:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.105527 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.105566 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.105577 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.105592 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.105608 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:52Z","lastTransitionTime":"2025-10-14T14:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.208354 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.208400 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.208412 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.208428 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.208438 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:52Z","lastTransitionTime":"2025-10-14T14:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.310618 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.310664 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.310675 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.310690 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.310702 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:52Z","lastTransitionTime":"2025-10-14T14:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.412972 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.413077 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.413100 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.413129 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.413155 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:52Z","lastTransitionTime":"2025-10-14T14:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.515491 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.515547 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.515563 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.515584 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.515603 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:52Z","lastTransitionTime":"2025-10-14T14:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.619080 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.619134 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.619154 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.619177 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.619195 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:52Z","lastTransitionTime":"2025-10-14T14:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.722670 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.722703 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.722713 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.722729 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.722740 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:52Z","lastTransitionTime":"2025-10-14T14:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.825196 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.825248 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.825256 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.825272 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.825286 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:52Z","lastTransitionTime":"2025-10-14T14:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.927745 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.927785 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.927796 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.927812 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:52 crc kubenswrapper[4860]: I1014 14:49:52.927826 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:52Z","lastTransitionTime":"2025-10-14T14:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.030474 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.030524 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.030540 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.030563 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.030581 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:53Z","lastTransitionTime":"2025-10-14T14:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.060592 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.060902 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.061636 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:53 crc kubenswrapper[4860]: E1014 14:49:53.061687 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:53 crc kubenswrapper[4860]: E1014 14:49:53.061267 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.061588 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:53 crc kubenswrapper[4860]: E1014 14:49:53.062168 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:53 crc kubenswrapper[4860]: E1014 14:49:53.062433 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.067177 4860 scope.go:117] "RemoveContainer" containerID="32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.138856 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.138905 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.138916 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.138934 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.138947 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:53Z","lastTransitionTime":"2025-10-14T14:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.241830 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.242249 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.242264 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.242286 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.242298 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:53Z","lastTransitionTime":"2025-10-14T14:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.343987 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.344020 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.344065 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.344079 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.344088 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:53Z","lastTransitionTime":"2025-10-14T14:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.446319 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.446350 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.446359 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.446371 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.446380 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:53Z","lastTransitionTime":"2025-10-14T14:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.477989 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovnkube-controller/1.log" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.479978 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerStarted","Data":"45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96"} Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.480999 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.497226 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.516081 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.531258 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.545470 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a71e62ff-6efd-4d0e-80b0-c988796836a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3453fcf3b3874b2b59af674d5bc2c6d806b1431e65aefbed34bf5dbc26a945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba1e959f7ea47716c4a292675af40550a65b87c5ce2c6e2bc9d7579997382a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b65bb07a7c9a756a34b9f485c8521029672018515e93eef3f557db38a56c428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-contr
oller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.548200 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.548228 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.548240 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.548255 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.548267 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:53Z","lastTransitionTime":"2025-10-14T14:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.558777 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.577706 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.590727 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.604607 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.614327 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.625782 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.636637 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.647794 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.650160 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.650202 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.650213 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.650230 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.650242 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:53Z","lastTransitionTime":"2025-10-14T14:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.659976 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.684211 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:37Z\\\",\\\"message\\\":\\\"nshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 14:49:36.876549 6262 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1014 14:49:36.876828 6262 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.698122 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.710477 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.723015 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.744942 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:53Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.752295 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.752340 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.752352 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.752370 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.752382 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:53Z","lastTransitionTime":"2025-10-14T14:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.854378 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.854412 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.854420 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.854432 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.854441 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:53Z","lastTransitionTime":"2025-10-14T14:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.956237 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.956701 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.956795 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.956865 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:53 crc kubenswrapper[4860]: I1014 14:49:53.956921 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:53Z","lastTransitionTime":"2025-10-14T14:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.059126 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.059171 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.059182 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.059199 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.059212 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:54Z","lastTransitionTime":"2025-10-14T14:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.160994 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.161059 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.161069 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.161112 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.161121 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:54Z","lastTransitionTime":"2025-10-14T14:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.263394 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.263639 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.263799 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.263905 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.263985 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:54Z","lastTransitionTime":"2025-10-14T14:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.366197 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.366252 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.366262 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.366281 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.366291 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:54Z","lastTransitionTime":"2025-10-14T14:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.468934 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.468993 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.469007 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.469054 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.469072 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:54Z","lastTransitionTime":"2025-10-14T14:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.484264 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovnkube-controller/2.log" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.484891 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovnkube-controller/1.log" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.487842 4860 generic.go:334] "Generic (PLEG): container finished" podID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerID="45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96" exitCode=1 Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.488121 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerDied","Data":"45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96"} Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.488240 4860 scope.go:117] "RemoveContainer" containerID="32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.489133 4860 scope.go:117] "RemoveContainer" containerID="45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96" Oct 14 14:49:54 crc kubenswrapper[4860]: E1014 14:49:54.489397 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.506157 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.522251 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.534014 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.548390 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.562548 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.572652 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.572693 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.572703 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.572718 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.572731 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:54Z","lastTransitionTime":"2025-10-14T14:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.578348 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.592327 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.613844 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32312191e8aa60beccefc79d992ae6b547cdf831a52f39ca4b576839590c027f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:37Z\\\",\\\"message\\\":\\\"nshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 14:49:36.876549 6262 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1014 14:49:36.876828 6262 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:53Z\\\",\\\"message\\\":\\\"4:49:53.900879 6450 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 14:49:53.900894 6450 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 14:49:53.900899 6450 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 14:49:53.901074 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 14:49:53.901101 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 14:49:53.901106 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1014 14:49:53.901116 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 14:49:53.901120 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 14:49:53.901160 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 14:49:53.901169 6450 handler.go:208] 
Removed *v1.NetworkPolicy event handler 4\\\\nI1014 14:49:53.901177 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 14:49:53.901182 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 14:49:53.901201 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 14:49:53.901222 6450 factory.go:656] Stopping watch factory\\\\nI1014 14:49:53.901251 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 14:49:53.901265 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1014 14:49:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",
\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.628159 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.642888 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.653128 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.675865 4860 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.675906 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.675916 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.675931 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.675943 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:54Z","lastTransitionTime":"2025-10-14T14:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.689641 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.707172 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.721253 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.746288 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 
14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.766132 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a71e62ff-6efd-4d0e-80b0-c988796836a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3453fcf3b3874b2b59af674d5bc2c6d806b1431e65aefbed34bf5dbc26a945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba1e959f7ea47716c4a292675af40550a65b87c5ce2c6e2bc9d7579997382a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b65bb07a7c9a756a34b9f485c8521029672018515e93eef3f557db38a56c428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.778439 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.778469 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.778478 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.778493 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.778502 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:54Z","lastTransitionTime":"2025-10-14T14:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.781098 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.793187 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:54Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.881105 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.881135 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.881143 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.881180 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.881191 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:54Z","lastTransitionTime":"2025-10-14T14:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.983678 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.983732 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.983743 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.983759 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:54 crc kubenswrapper[4860]: I1014 14:49:54.983769 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:54Z","lastTransitionTime":"2025-10-14T14:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.060993 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:55 crc kubenswrapper[4860]: E1014 14:49:55.061142 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.061263 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.061341 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.061400 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:55 crc kubenswrapper[4860]: E1014 14:49:55.061581 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:55 crc kubenswrapper[4860]: E1014 14:49:55.061708 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:49:55 crc kubenswrapper[4860]: E1014 14:49:55.061855 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.086084 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.086128 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.086141 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.086162 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.086177 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:55Z","lastTransitionTime":"2025-10-14T14:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.188606 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.188642 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.188650 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.188665 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.188675 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:55Z","lastTransitionTime":"2025-10-14T14:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.290992 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.291041 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.291050 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.291064 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.291074 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:55Z","lastTransitionTime":"2025-10-14T14:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.393281 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.393349 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.393360 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.393375 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.393385 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:55Z","lastTransitionTime":"2025-10-14T14:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.493566 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovnkube-controller/2.log" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.495050 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.495085 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.495094 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.495109 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.495118 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:55Z","lastTransitionTime":"2025-10-14T14:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.501536 4860 scope.go:117] "RemoveContainer" containerID="45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96" Oct 14 14:49:55 crc kubenswrapper[4860]: E1014 14:49:55.501731 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.515570 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.525925 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.535908 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.551964 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:53Z\\\",\\\"message\\\":\\\"4:49:53.900879 6450 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 14:49:53.900894 6450 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 14:49:53.900899 6450 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 14:49:53.901074 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 14:49:53.901101 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 14:49:53.901106 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1014 14:49:53.901116 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 14:49:53.901120 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 14:49:53.901160 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 14:49:53.901169 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 14:49:53.901177 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 14:49:53.901182 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 14:49:53.901201 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 14:49:53.901222 6450 factory.go:656] Stopping watch factory\\\\nI1014 14:49:53.901251 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 14:49:53.901265 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1014 14:49:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.568706 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a
8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.580560 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.591514 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.597242 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.597281 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.597290 4860 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.597305 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.597315 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:55Z","lastTransitionTime":"2025-10-14T14:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.604947 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.616072 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a71e62ff-6efd-4d0e-80b0-c988796836a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3453fcf3b3874b2b59af674d5bc2c6d806b1431e65aefbed34bf5dbc26a945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba1e959f7ea47716c4a292675af40550a65b87c5ce2c6e2bc9d7579997382a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3b65bb07a7c9a756a34b9f485c8521029672018515e93eef3f557db38a56c428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.628164 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.631598 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.631634 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.631644 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.631662 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.631672 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:55Z","lastTransitionTime":"2025-10-14T14:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:55 crc kubenswrapper[4860]: E1014 14:49:55.674708 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.674805 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.679101 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.679144 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.679162 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.679184 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.679200 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:55Z","lastTransitionTime":"2025-10-14T14:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.690217 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: E1014 14:49:55.691359 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.694261 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.694292 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.694300 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.694314 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.694323 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:55Z","lastTransitionTime":"2025-10-14T14:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.702022 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: E1014 14:49:55.705309 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f
3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.708546 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.708587 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.708598 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.708615 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.708625 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:55Z","lastTransitionTime":"2025-10-14T14:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.713242 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: E1014 14:49:55.719441 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.722255 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.722295 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.722310 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.722327 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.722340 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:55Z","lastTransitionTime":"2025-10-14T14:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.724076 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: E1014 14:49:55.733780 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: E1014 14:49:55.733999 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.736240 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.736276 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.736286 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.736302 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.736314 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:55Z","lastTransitionTime":"2025-10-14T14:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.736855 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.750883 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.1
1\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"star
tedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.760449 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:55Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.838856 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.838892 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.838902 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.838916 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.838926 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:55Z","lastTransitionTime":"2025-10-14T14:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.940964 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.941004 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.941015 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.941046 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:55 crc kubenswrapper[4860]: I1014 14:49:55.941060 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:55Z","lastTransitionTime":"2025-10-14T14:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.043390 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.043425 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.043434 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.043447 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.043457 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:56Z","lastTransitionTime":"2025-10-14T14:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.145755 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.145791 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.145801 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.145817 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.145828 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:56Z","lastTransitionTime":"2025-10-14T14:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.247577 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.247622 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.247632 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.247647 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.247658 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:56Z","lastTransitionTime":"2025-10-14T14:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.350524 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.350554 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.350584 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.350603 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.350613 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:56Z","lastTransitionTime":"2025-10-14T14:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.453261 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.453359 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.453371 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.453385 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.453393 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:56Z","lastTransitionTime":"2025-10-14T14:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.555947 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.555997 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.556009 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.556041 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.556069 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:56Z","lastTransitionTime":"2025-10-14T14:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.658262 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.658315 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.658324 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.658363 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.658383 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:56Z","lastTransitionTime":"2025-10-14T14:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.760004 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.760067 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.760078 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.760092 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.760101 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:56Z","lastTransitionTime":"2025-10-14T14:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.862772 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.862805 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.862813 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.862825 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.862832 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:56Z","lastTransitionTime":"2025-10-14T14:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.965484 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.965519 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.965529 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.965543 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:56 crc kubenswrapper[4860]: I1014 14:49:56.965553 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:56Z","lastTransitionTime":"2025-10-14T14:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.060759 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.060806 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.060865 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.060778 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:57 crc kubenswrapper[4860]: E1014 14:49:57.060872 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:57 crc kubenswrapper[4860]: E1014 14:49:57.061055 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:57 crc kubenswrapper[4860]: E1014 14:49:57.061072 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:57 crc kubenswrapper[4860]: E1014 14:49:57.061117 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.067001 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.067037 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.067046 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.067058 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.067066 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:57Z","lastTransitionTime":"2025-10-14T14:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.169687 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.169738 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.169752 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.169768 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.169777 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:57Z","lastTransitionTime":"2025-10-14T14:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.272021 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.272111 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.272122 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.272136 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.272148 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:57Z","lastTransitionTime":"2025-10-14T14:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.374450 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.374495 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.374503 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.374518 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.374528 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:57Z","lastTransitionTime":"2025-10-14T14:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.476506 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.476555 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.476564 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.476580 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.476589 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:57Z","lastTransitionTime":"2025-10-14T14:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.578213 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.578244 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.578252 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.578264 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.578272 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:57Z","lastTransitionTime":"2025-10-14T14:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.680665 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.680705 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.680714 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.680729 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.680737 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:57Z","lastTransitionTime":"2025-10-14T14:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.782681 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.782717 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.782725 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.782739 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.782749 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:57Z","lastTransitionTime":"2025-10-14T14:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.884592 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.884660 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.884674 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.884689 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.884698 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:57Z","lastTransitionTime":"2025-10-14T14:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.986441 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.986488 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.986505 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.986524 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:57 crc kubenswrapper[4860]: I1014 14:49:57.986536 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:57Z","lastTransitionTime":"2025-10-14T14:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.089045 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.089088 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.089099 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.089116 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.089136 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:58Z","lastTransitionTime":"2025-10-14T14:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.191758 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.191794 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.191802 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.191815 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.191824 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:58Z","lastTransitionTime":"2025-10-14T14:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.293629 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.293659 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.293666 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.293680 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.293690 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:58Z","lastTransitionTime":"2025-10-14T14:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.396622 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.396649 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.396657 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.396670 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.396678 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:58Z","lastTransitionTime":"2025-10-14T14:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.499070 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.499101 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.499111 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.499123 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.499131 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:58Z","lastTransitionTime":"2025-10-14T14:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.601399 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.601435 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.601444 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.601458 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.601469 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:58Z","lastTransitionTime":"2025-10-14T14:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.704943 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.704983 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.705130 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.705326 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.705339 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:58Z","lastTransitionTime":"2025-10-14T14:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.808285 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.808333 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.808344 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.808359 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.808369 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:58Z","lastTransitionTime":"2025-10-14T14:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.909938 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.909986 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.910002 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.910022 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:58 crc kubenswrapper[4860]: I1014 14:49:58.910073 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:58Z","lastTransitionTime":"2025-10-14T14:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.012582 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.012617 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.012626 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.012642 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.012654 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:59Z","lastTransitionTime":"2025-10-14T14:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.061384 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:49:59 crc kubenswrapper[4860]: E1014 14:49:59.061835 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.061431 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:49:59 crc kubenswrapper[4860]: E1014 14:49:59.061914 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.061540 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:49:59 crc kubenswrapper[4860]: E1014 14:49:59.061969 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.061384 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:49:59 crc kubenswrapper[4860]: E1014 14:49:59.062078 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.075852 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287
faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.087942 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.097866 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.108662 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.115015 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.115066 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.115076 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.115093 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.115103 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:59Z","lastTransitionTime":"2025-10-14T14:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.121504 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.131129 4860 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.144480 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.159881 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.169763 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.186925 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:53Z\\\",\\\"message\\\":\\\"4:49:53.900879 6450 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 14:49:53.900894 6450 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 14:49:53.900899 6450 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 14:49:53.901074 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 14:49:53.901101 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 14:49:53.901106 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1014 14:49:53.901116 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 14:49:53.901120 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 14:49:53.901160 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 14:49:53.901169 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 14:49:53.901177 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 14:49:53.901182 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 14:49:53.901201 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 14:49:53.901222 6450 factory.go:656] Stopping watch factory\\\\nI1014 14:49:53.901251 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 14:49:53.901265 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1014 14:49:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.208969 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a
8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.217212 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.217242 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:59 crc 
kubenswrapper[4860]: I1014 14:49:59.217250 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.217263 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.217276 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:59Z","lastTransitionTime":"2025-10-14T14:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.221568 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.232329 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.242880 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.252534 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a71e62ff-6efd-4d0e-80b0-c988796836a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3453fcf3b3874b2b59af674d5bc2c6d806b1431e65aefbed34bf5dbc26a945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba1e959f7ea47716c4a292675af40550a65b87c5ce2c6e2bc9d7579997382a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b65bb07a7c9a756a34b9f485c8521029672018515e93eef3f557db38a56c428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.263714 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.273633 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.282955 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:49:59Z is after 2025-08-24T17:21:41Z" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.320062 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.320090 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.320099 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.320114 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.320124 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:59Z","lastTransitionTime":"2025-10-14T14:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.422787 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.422822 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.422833 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.422849 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.422860 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:59Z","lastTransitionTime":"2025-10-14T14:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.525320 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.525350 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.525360 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.525374 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.525382 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:59Z","lastTransitionTime":"2025-10-14T14:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.635474 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.635521 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.635533 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.635551 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.635564 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:59Z","lastTransitionTime":"2025-10-14T14:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.738577 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.739093 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.739101 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.739113 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.739123 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:59Z","lastTransitionTime":"2025-10-14T14:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.841141 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.841173 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.841184 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.841197 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.841207 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:59Z","lastTransitionTime":"2025-10-14T14:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.944264 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.944303 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.944311 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.944326 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:49:59 crc kubenswrapper[4860]: I1014 14:49:59.944337 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:49:59Z","lastTransitionTime":"2025-10-14T14:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.049648 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.049678 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.049688 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.049702 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.049712 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:00Z","lastTransitionTime":"2025-10-14T14:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.152383 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.152426 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.152437 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.152456 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.152468 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:00Z","lastTransitionTime":"2025-10-14T14:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.254417 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.254447 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.254455 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.254467 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.254476 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:00Z","lastTransitionTime":"2025-10-14T14:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.356876 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.356911 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.356920 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.356933 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.356944 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:00Z","lastTransitionTime":"2025-10-14T14:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.459204 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.459270 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.459284 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.459307 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.459323 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:00Z","lastTransitionTime":"2025-10-14T14:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.562096 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.562175 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.562198 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.562238 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.562259 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:00Z","lastTransitionTime":"2025-10-14T14:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.664887 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.664939 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.664948 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.664961 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.664970 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:00Z","lastTransitionTime":"2025-10-14T14:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.767474 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.767594 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.767610 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.767657 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.767673 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:00Z","lastTransitionTime":"2025-10-14T14:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.869908 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.869941 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.869969 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.869984 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.869993 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:00Z","lastTransitionTime":"2025-10-14T14:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.972386 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.972423 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.972432 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.972446 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:00 crc kubenswrapper[4860]: I1014 14:50:00.972455 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:00Z","lastTransitionTime":"2025-10-14T14:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:01 crc kubenswrapper[4860]: I1014 14:50:01.061156 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:01 crc kubenswrapper[4860]: I1014 14:50:01.061233 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:01 crc kubenswrapper[4860]: I1014 14:50:01.061288 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:01 crc kubenswrapper[4860]: E1014 14:50:01.061293 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:01 crc kubenswrapper[4860]: I1014 14:50:01.061306 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:01 crc kubenswrapper[4860]: E1014 14:50:01.061378 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:01 crc kubenswrapper[4860]: E1014 14:50:01.061458 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:01 crc kubenswrapper[4860]: E1014 14:50:01.061530 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:01 crc kubenswrapper[4860]: I1014 14:50:01.074333 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:01 crc kubenswrapper[4860]: I1014 14:50:01.074363 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:01 crc kubenswrapper[4860]: I1014 14:50:01.074371 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:01 crc kubenswrapper[4860]: I1014 14:50:01.074385 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:01 crc kubenswrapper[4860]: I1014 14:50:01.074393 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:01Z","lastTransitionTime":"2025-10-14T14:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:01 crc kubenswrapper[4860]: I1014 14:50:01.177114 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:01 crc kubenswrapper[4860]: I1014 14:50:01.177153 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:01 crc kubenswrapper[4860]: I1014 14:50:01.177163 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:01 crc kubenswrapper[4860]: I1014 14:50:01.177177 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:01 crc kubenswrapper[4860]: I1014 14:50:01.177185 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:01Z","lastTransitionTime":"2025-10-14T14:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... the NodeNotReady cycle keeps repeating roughly every 100 ms from 14:50:01.074 through 14:50:03.024, differing only in timestamps ...]
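The cycle above is the kubelet's node-status loop: each pass re-records the resource events and re-sets the Ready condition to False with reason KubeletNotReady, because no CNI configuration has appeared yet. A client-go sketch (assumed tooling, not part of the node) that polls the same condition from outside, given a reachable kubeconfig in $KUBECONFIG:

```go
// readycheck: poll the Ready condition of node "crc", i.e. the condition
// that the setters.go records above keep writing.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s reason=%s message=%q\n",
					time.Now().Format(time.RFC3339), c.Status, c.Reason, c.Message)
			}
		}
		time.Sleep(2 * time.Second)
	}
}
```

While the CNI config is missing this prints Ready=False with reason KubeletNotReady, mirroring the log.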
Oct 14 14:50:03 crc kubenswrapper[4860]: I1014 14:50:03.061349 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 14:50:03 crc kubenswrapper[4860]: I1014 14:50:03.061475 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 14:50:03 crc kubenswrapper[4860]: E1014 14:50:03.061518 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 14:50:03 crc kubenswrapper[4860]: I1014 14:50:03.061556 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 14:50:03 crc kubenswrapper[4860]: E1014 14:50:03.061702 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 14:50:03 crc kubenswrapper[4860]: E1014 14:50:03.061774 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 14:50:03 crc kubenswrapper[4860]: I1014 14:50:03.062014 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw"
Oct 14 14:50:03 crc kubenswrapper[4860]: E1014 14:50:03.062256 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816"
[... the NodeNotReady cycle repeats at 14:50:03.127, .229, .332, .434, .536, .638 and .740 ...]
Oct 14 14:50:03 crc kubenswrapper[4860]: I1014 14:50:03.844296 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:03 crc kubenswrapper[4860]: I1014 14:50:03.844385 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:03 crc kubenswrapper[4860]: I1014 14:50:03.844418 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:03 crc kubenswrapper[4860]: I1014 14:50:03.844455 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:03 crc kubenswrapper[4860]: I1014 14:50:03.844478 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:03Z","lastTransitionTime":"2025-10-14T14:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Oct 14 14:50:03 crc kubenswrapper[4860]: I1014 14:50:03.947444 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:03 crc kubenswrapper[4860]: I1014 14:50:03.947483 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:03 crc kubenswrapper[4860]: I1014 14:50:03.947494 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:03 crc kubenswrapper[4860]: I1014 14:50:03.947509 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:03 crc kubenswrapper[4860]: I1014 14:50:03.947520 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:03Z","lastTransitionTime":"2025-10-14T14:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.049335 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.049378 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.049389 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.049404 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.049415 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:04Z","lastTransitionTime":"2025-10-14T14:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.150737 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.150779 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.150790 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.150807 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.150819 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:04Z","lastTransitionTime":"2025-10-14T14:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.252721 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.253271 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.253356 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.253436 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.253499 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:04Z","lastTransitionTime":"2025-10-14T14:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.355478 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.355515 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.355524 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.355539 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.355548 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:04Z","lastTransitionTime":"2025-10-14T14:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.457823 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.457851 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.457858 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.457871 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.457879 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:04Z","lastTransitionTime":"2025-10-14T14:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.560466 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.560503 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.560514 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.560529 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.560543 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:04Z","lastTransitionTime":"2025-10-14T14:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.662328 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.662375 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.662385 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.662401 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.662413 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:04Z","lastTransitionTime":"2025-10-14T14:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.764344 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.764398 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.764411 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.764427 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.764438 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:04Z","lastTransitionTime":"2025-10-14T14:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.866587 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.866633 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.866645 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.866663 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.866675 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:04Z","lastTransitionTime":"2025-10-14T14:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.969232 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.969523 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.969587 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.969658 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:04 crc kubenswrapper[4860]: I1014 14:50:04.969720 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:04Z","lastTransitionTime":"2025-10-14T14:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.061514 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.061572 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:05 crc kubenswrapper[4860]: E1014 14:50:05.061613 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.061807 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:05 crc kubenswrapper[4860]: E1014 14:50:05.061799 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.061861 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:05 crc kubenswrapper[4860]: E1014 14:50:05.061885 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:05 crc kubenswrapper[4860]: E1014 14:50:05.062018 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.071845 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.071877 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.071888 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.071897 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.071905 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:05Z","lastTransitionTime":"2025-10-14T14:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.174111 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.174135 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.174143 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.174156 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.174165 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:05Z","lastTransitionTime":"2025-10-14T14:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.257746 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs\") pod \"network-metrics-daemon-vtscw\" (UID: \"2b36dd73-c75d-446e-85fe-d11afdd5a816\") " pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:05 crc kubenswrapper[4860]: E1014 14:50:05.257875 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 14:50:05 crc kubenswrapper[4860]: E1014 14:50:05.257936 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs podName:2b36dd73-c75d-446e-85fe-d11afdd5a816 nodeName:}" failed. No retries permitted until 2025-10-14 14:50:37.257918933 +0000 UTC m=+98.844702382 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs") pod "network-metrics-daemon-vtscw" (UID: "2b36dd73-c75d-446e-85fe-d11afdd5a816") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.276318 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.276355 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.276365 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.276382 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.276394 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:05Z","lastTransitionTime":"2025-10-14T14:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.378783 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.378825 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.378835 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.378851 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.378861 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:05Z","lastTransitionTime":"2025-10-14T14:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
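[Annotation] The nestedpendingoperations.go:348 entry above shows the kubelet's per-volume exponential backoff: each failed MountVolume attempt doubles the delay before the next retry, which is how this attempt arrives at durationBeforeRetry 32s (and why the retry is scheduled for 14:50:37, exactly 32 seconds out). A sketch of that schedule, assuming the upstream kubelet defaults of a 500ms initial delay doubling up to a cap of a bit over two minutes (the exact constants live in kubelet's exponentialbackoff package):

```go
// backoff.go - reproduce the retry schedule implied by
// "(durationBeforeRetry 32s)" above. Assumed kubelet-style defaults:
// 500ms initial delay, doubling per failure, capped at 2m2s.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 500 * time.Millisecond
		maxDelay = 2*time.Minute + 2*time.Second
	)
	d := initial
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("after failure %2d: wait %v before retrying\n", attempt, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	// Failure 7 waits 32s, matching the log entry; later failures
	// saturate at the cap.
}
```

The failure itself, object "openshift-multus"/"metrics-daemon-secret" not registered, typically means the kubelet's secret manager is not yet tracking that secret for the pod, so the mount cannot be satisfied; the backoff just spaces out the retries until it can be.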
Has your network provider started?"} Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.480848 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.480880 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.480889 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.480903 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.480911 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:05Z","lastTransitionTime":"2025-10-14T14:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.583205 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.583262 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.583273 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.583289 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.583301 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:05Z","lastTransitionTime":"2025-10-14T14:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.685695 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.685727 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.685736 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.685750 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.685759 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:05Z","lastTransitionTime":"2025-10-14T14:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.787747 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.787785 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.787801 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.787815 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.787826 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:05Z","lastTransitionTime":"2025-10-14T14:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.890324 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.890639 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.890757 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.890861 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.890936 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:05Z","lastTransitionTime":"2025-10-14T14:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.954498 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.954552 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.954563 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.954584 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.954596 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:05Z","lastTransitionTime":"2025-10-14T14:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:05 crc kubenswrapper[4860]: E1014 14:50:05.967551 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:05Z is after 
2025-08-24T17:21:41Z" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.971147 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.971191 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.971203 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.971221 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.971234 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:05Z","lastTransitionTime":"2025-10-14T14:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:05 crc kubenswrapper[4860]: E1014 14:50:05.987350 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:05Z is after 
2025-08-24T17:21:41Z" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.994843 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.995250 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.995263 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.995280 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:05 crc kubenswrapper[4860]: I1014 14:50:05.995312 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:05Z","lastTransitionTime":"2025-10-14T14:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:06 crc kubenswrapper[4860]: E1014 14:50:06.008758 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], [images / nodeInfo / runtimeHandlers payload identical to the 14:50:05 attempt above; omitted]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:06Z is after 
2025-08-24T17:21:41Z" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.012257 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.012307 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.012318 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.012335 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.012349 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:06Z","lastTransitionTime":"2025-10-14T14:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:06 crc kubenswrapper[4860]: E1014 14:50:06.026696 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], [images / nodeInfo / runtimeHandlers payload identical to the 14:50:05 attempt above; omitted]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:06Z is after 
2025-08-24T17:21:41Z" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.031204 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.031424 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.031558 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.031707 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.031898 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:06Z","lastTransitionTime":"2025-10-14T14:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:06 crc kubenswrapper[4860]: E1014 14:50:06.051108 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], [images / nodeInfo / runtimeHandlers payload identical to the 14:50:05 attempt above; omitted]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:06Z is after 
2025-08-24T17:21:41Z" Oct 14 14:50:06 crc kubenswrapper[4860]: E1014 14:50:06.051378 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.052895 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.052932 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.052941 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.052959 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.052969 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:06Z","lastTransitionTime":"2025-10-14T14:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.155067 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.155121 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.155133 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.155148 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.155158 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:06Z","lastTransitionTime":"2025-10-14T14:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.256702 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.256740 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.256751 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.256768 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.256780 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:06Z","lastTransitionTime":"2025-10-14T14:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.358540 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.358575 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.358583 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.358596 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.358604 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:06Z","lastTransitionTime":"2025-10-14T14:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.460772 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.460816 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.460827 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.460843 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.460853 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:06Z","lastTransitionTime":"2025-10-14T14:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.562850 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.562887 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.562898 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.562913 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.562922 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:06Z","lastTransitionTime":"2025-10-14T14:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.664801 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.664825 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.664838 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.664873 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.664887 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:06Z","lastTransitionTime":"2025-10-14T14:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.766791 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.766830 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.766839 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.766854 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.766863 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:06Z","lastTransitionTime":"2025-10-14T14:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.869521 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.869770 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.869820 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.869845 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.869861 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:06Z","lastTransitionTime":"2025-10-14T14:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.971958 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.972043 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.972055 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.972072 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:06 crc kubenswrapper[4860]: I1014 14:50:06.972083 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:06Z","lastTransitionTime":"2025-10-14T14:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.060649 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.060751 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.060685 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.060685 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:07 crc kubenswrapper[4860]: E1014 14:50:07.060943 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:07 crc kubenswrapper[4860]: E1014 14:50:07.061066 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:07 crc kubenswrapper[4860]: E1014 14:50:07.061128 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:07 crc kubenswrapper[4860]: E1014 14:50:07.061271 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.083174 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.083211 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.083222 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.083240 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.083252 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:07Z","lastTransitionTime":"2025-10-14T14:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.185129 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.185161 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.185172 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.185187 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.185198 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:07Z","lastTransitionTime":"2025-10-14T14:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.287213 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.287250 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.287260 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.287277 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.287285 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:07Z","lastTransitionTime":"2025-10-14T14:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.389429 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.389470 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.389479 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.389493 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.389517 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:07Z","lastTransitionTime":"2025-10-14T14:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.491740 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.491772 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.491781 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.491793 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.491805 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:07Z","lastTransitionTime":"2025-10-14T14:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.593536 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.593577 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.593588 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.593604 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.593615 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:07Z","lastTransitionTime":"2025-10-14T14:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.695383 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.695418 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.695429 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.695445 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.695456 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:07Z","lastTransitionTime":"2025-10-14T14:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.797383 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.797408 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.797416 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.797429 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.797439 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:07Z","lastTransitionTime":"2025-10-14T14:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.899318 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.899361 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.899370 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.899384 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:07 crc kubenswrapper[4860]: I1014 14:50:07.899394 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:07Z","lastTransitionTime":"2025-10-14T14:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.002211 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.002249 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.002259 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.002277 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.002298 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:08Z","lastTransitionTime":"2025-10-14T14:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.061621 4860 scope.go:117] "RemoveContainer" containerID="45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96" Oct 14 14:50:08 crc kubenswrapper[4860]: E1014 14:50:08.062072 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.104173 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.104425 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.104513 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.104604 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.104684 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:08Z","lastTransitionTime":"2025-10-14T14:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.207632 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.207694 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.207705 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.207721 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.207733 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:08Z","lastTransitionTime":"2025-10-14T14:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.310379 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.310413 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.310421 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.310435 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.310444 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:08Z","lastTransitionTime":"2025-10-14T14:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.413109 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.413147 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.413158 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.413176 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.413188 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:08Z","lastTransitionTime":"2025-10-14T14:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.516182 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.516220 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.516233 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.516251 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.516266 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:08Z","lastTransitionTime":"2025-10-14T14:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.618970 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.619018 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.619056 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.619073 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.619086 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:08Z","lastTransitionTime":"2025-10-14T14:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.721542 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.721587 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.721598 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.721613 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.721622 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:08Z","lastTransitionTime":"2025-10-14T14:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.824359 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.824394 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.824404 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.824419 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.824429 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:08Z","lastTransitionTime":"2025-10-14T14:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.926686 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.926731 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.926743 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.926759 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:08 crc kubenswrapper[4860]: I1014 14:50:08.926769 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:08Z","lastTransitionTime":"2025-10-14T14:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.029395 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.029443 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.029451 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.029464 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.029473 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:09Z","lastTransitionTime":"2025-10-14T14:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.061384 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.061423 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:09 crc kubenswrapper[4860]: E1014 14:50:09.061522 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.061584 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.061622 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:09 crc kubenswrapper[4860]: E1014 14:50:09.061736 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:09 crc kubenswrapper[4860]: E1014 14:50:09.061818 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:09 crc kubenswrapper[4860]: E1014 14:50:09.061884 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.074506 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.087702 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.098322 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.109786 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.122906 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea1
2f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.131254 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.131281 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.131290 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.131306 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.131316 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:09Z","lastTransitionTime":"2025-10-14T14:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.133198 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.145407 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.156759 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.166330 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.182083 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:53Z\\\",\\\"message\\\":\\\"4:49:53.900879 6450 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 14:49:53.900894 6450 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 14:49:53.900899 6450 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 14:49:53.901074 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 14:49:53.901101 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 14:49:53.901106 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1014 14:49:53.901116 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 14:49:53.901120 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 14:49:53.901160 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 14:49:53.901169 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 14:49:53.901177 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 14:49:53.901182 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 14:49:53.901201 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 14:49:53.901222 6450 factory.go:656] Stopping watch factory\\\\nI1014 14:49:53.901251 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 14:49:53.901265 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1014 14:49:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.198893 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a
8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.210237 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.220425 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.229852 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.233336 4860 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.233371 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.233378 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.233392 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.233401 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:09Z","lastTransitionTime":"2025-10-14T14:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.241921 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a71e62ff-6efd-4d0e-80b0-c988796836a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3453fcf3b3874b2b59af674d5bc2c6d806b1431e65aefbed34bf5dbc26a945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba1e959f7ea47716c4a292675af40550a65b87c5ce2c6e2bc9d7579997382a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b65bb07a7c9a756a34b9f485c8521029672018515e93eef3f557db38a56c428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.254139 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.263953 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.274240 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:09Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.335864 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.335900 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.335909 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.335922 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.335930 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:09Z","lastTransitionTime":"2025-10-14T14:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.438300 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.438328 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.438337 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.438366 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:09 crc kubenswrapper[4860]: I1014 14:50:09.438375 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:09Z","lastTransitionTime":"2025-10-14T14:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [log condensed: the preceding "Recording event message for node" entries (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) and the "Node became not ready" status update repeat verbatim, apart from their timestamps, roughly every 100 ms from 14:50:09.540 through 14:50:10.974; the final occurrence follows.] Oct 14 14:50:10 crc kubenswrapper[4860]: I1014 14:50:10.974785 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:10Z","lastTransitionTime":"2025-10-14T14:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
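[editor's note] The block above is the kubelet's network-readiness loop: the node's Ready condition stays False, and the same KubeletNotReady message is re-posted on every tick, until a CNI configuration appears in /etc/kubernetes/cni/net.d/. As a minimal sketch of what that gate amounts to on disk (an editor's addition, not output from this system; assumes Python 3 on the node and uses the directory exactly as it is named in the log message), one could check:

import glob
import os

# Directory the kubelet names in the NetworkPluginNotReady message above.
cni_dir = "/etc/kubernetes/cni/net.d/"

# The network plugin is considered ready once a CNI config file
# (.conf/.conflist/.json) exists here; an empty listing matches the
# repeated "no CNI configuration file" message in this log.
configs = sorted(glob.glob(os.path.join(cni_dir, "*")))
print("CNI config files:", configs if configs else "none (NetworkReady=false)")

In this capture the file never appears because the pods that would write it are themselves stuck, as the multus container exit further below shows, which is why the identical condition repeats for more than two seconds.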
Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.060839 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.060888 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.060916 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:11 crc kubenswrapper[4860]: E1014 14:50:11.060957 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:11 crc kubenswrapper[4860]: E1014 14:50:11.061080 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:11 crc kubenswrapper[4860]: E1014 14:50:11.061209 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.061280 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:11 crc kubenswrapper[4860]: E1014 14:50:11.061346 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.076710 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.076956 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.077041 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.077117 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.077211 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:11Z","lastTransitionTime":"2025-10-14T14:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.179346 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.179396 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.179407 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.179420 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.179428 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:11Z","lastTransitionTime":"2025-10-14T14:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [log condensed: the same node-event and "Node became not ready" block repeats at 14:50:11.281-.282, 14:50:11.384, and 14:50:11.486-.487; the final status update follows.] Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.487552 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:11Z","lastTransitionTime":"2025-10-14T14:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.540734 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dcr2g_ceb09eae-57c9-4a8e-95d5-aa40e49f7316/kube-multus/0.log" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.541016 4860 generic.go:334] "Generic (PLEG): container finished" podID="ceb09eae-57c9-4a8e-95d5-aa40e49f7316" containerID="854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f" exitCode=1 Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.541077 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dcr2g" event={"ID":"ceb09eae-57c9-4a8e-95d5-aa40e49f7316","Type":"ContainerDied","Data":"854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f"} Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.541591 4860 scope.go:117] "RemoveContainer" containerID="854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.555435 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.568076 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.583346 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.591545 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.591588 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.591600 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.591615 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.591628 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:11Z","lastTransitionTime":"2025-10-14T14:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.602906 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:50:10Z\\\",\\\"message\\\":\\\"2025-10-14T14:49:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_962995be-e23e-4074-931a-53be49815580\\\\n2025-10-14T14:49:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_962995be-e23e-4074-931a-53be49815580 to /host/opt/cni/bin/\\\\n2025-10-14T14:49:25Z [verbose] multus-daemon started\\\\n2025-10-14T14:49:25Z [verbose] Readiness Indicator file check\\\\n2025-10-14T14:50:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.617642 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.631513 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.644268 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.655546 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.669192 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.686872 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:53Z\\\",\\\"message\\\":\\\"4:49:53.900879 6450 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 14:49:53.900894 6450 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 14:49:53.900899 6450 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 14:49:53.901074 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 14:49:53.901101 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 14:49:53.901106 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1014 14:49:53.901116 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 14:49:53.901120 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 14:49:53.901160 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 14:49:53.901169 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 14:49:53.901177 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 14:49:53.901182 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 14:49:53.901201 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 14:49:53.901222 6450 factory.go:656] Stopping watch factory\\\\nI1014 14:49:53.901251 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 14:49:53.901265 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1014 14:49:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.694079 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.694100 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.694108 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.694120 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.694128 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:11Z","lastTransitionTime":"2025-10-14T14:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.706847 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.722561 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.733475 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.743376 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.753825 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a71e62ff-6efd-4d0e-80b0-c988796836a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3453fcf3b3874b2b59af674d5bc2c6d806b1431e65aefbed34bf5dbc26a945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba1e959f7ea47716c4a292675af40550a65b87c5ce2c6e2bc9d7579997382a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b65bb07a7c9a756a34b9f485c8521029672018515e93eef3f557db38a56c428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.765751 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.774530 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.785369 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:11Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.795900 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.795919 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.795927 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.795938 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.795946 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:11Z","lastTransitionTime":"2025-10-14T14:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.897947 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.897983 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.897991 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.898004 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:11 crc kubenswrapper[4860]: I1014 14:50:11.898014 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:11Z","lastTransitionTime":"2025-10-14T14:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.000931 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.000957 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.000966 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.000978 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.000987 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:12Z","lastTransitionTime":"2025-10-14T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.103478 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.103509 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.103519 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.103532 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.103544 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:12Z","lastTransitionTime":"2025-10-14T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.205692 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.205729 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.205740 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.205755 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.205765 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:12Z","lastTransitionTime":"2025-10-14T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.307684 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.307975 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.308070 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.308135 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.308197 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:12Z","lastTransitionTime":"2025-10-14T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.409807 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.409844 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.409854 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.409869 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.409880 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:12Z","lastTransitionTime":"2025-10-14T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.512078 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.512113 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.512123 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.512138 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.512147 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:12Z","lastTransitionTime":"2025-10-14T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.544745 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dcr2g_ceb09eae-57c9-4a8e-95d5-aa40e49f7316/kube-multus/0.log" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.544790 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dcr2g" event={"ID":"ceb09eae-57c9-4a8e-95d5-aa40e49f7316","Type":"ContainerStarted","Data":"4dd2467d8c6acdf7e08b9eab1c254d5a14134e125433a9b40b8eb6dc66cbe4ab"} Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.562361 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.574809 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.586052 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.597888 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.609082 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a71e62ff-6efd-4d0e-80b0-c988796836a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3453fcf3b3874b2b59af674d5bc2c6d806b1431e65aefbed34bf5dbc26a945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba1e959f7ea47716c4a292675af40550a65b87c5ce2c6e2bc9d7579997382a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b65bb07a7c9a756a34b9f485c8521029672018515e93eef3f557db38a56c428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.613580 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.613610 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.613621 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.613637 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.613650 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:12Z","lastTransitionTime":"2025-10-14T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.622525 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.633294 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.643921 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 
14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.656641 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.666335 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.677096 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.687912 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.698432 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.712658 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd2467d8c6acdf7e08b9eab1c254d5a14134e125433a9b40b8eb6dc66cbe4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:50:10Z\\\",\\\"message\\\":\\\"2025-10-14T14:49:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_962995be-e23e-4074-931a-53be49815580\\\\n2025-10-14T14:49:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_962995be-e23e-4074-931a-53be49815580 to /host/opt/cni/bin/\\\\n2025-10-14T14:49:25Z [verbose] multus-daemon started\\\\n2025-10-14T14:49:25Z [verbose] Readiness Indicator file check\\\\n2025-10-14T14:50:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:50:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.715488 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.715524 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.715536 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.715553 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.715564 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:12Z","lastTransitionTime":"2025-10-14T14:50:12Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.731281 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:53Z\\\",\\\"message\\\":\\\"4:49:53.900879 6450 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 14:49:53.900894 6450 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 14:49:53.900899 6450 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 14:49:53.901074 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 14:49:53.901101 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 14:49:53.901106 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1014 14:49:53.901116 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 14:49:53.901120 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 14:49:53.901160 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 14:49:53.901169 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 14:49:53.901177 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 14:49:53.901182 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 14:49:53.901201 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 14:49:53.901222 6450 factory.go:656] Stopping watch factory\\\\nI1014 14:49:53.901251 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 14:49:53.901265 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1014 
14:49:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.747291 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.758206 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.766917 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:12Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.818464 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.818503 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.818511 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.818525 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.818534 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:12Z","lastTransitionTime":"2025-10-14T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.929381 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.929432 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.929443 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.929459 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:12 crc kubenswrapper[4860]: I1014 14:50:12.929471 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:12Z","lastTransitionTime":"2025-10-14T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.032102 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.032137 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.032146 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.032160 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.032169 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:13Z","lastTransitionTime":"2025-10-14T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.060887 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:13 crc kubenswrapper[4860]: E1014 14:50:13.061355 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.061558 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:13 crc kubenswrapper[4860]: E1014 14:50:13.061790 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.062211 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:13 crc kubenswrapper[4860]: E1014 14:50:13.062377 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.062663 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:13 crc kubenswrapper[4860]: E1014 14:50:13.062837 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.134442 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.134721 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.134825 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.134929 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.135038 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:13Z","lastTransitionTime":"2025-10-14T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.237371 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.237410 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.237418 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.237432 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.237441 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:13Z","lastTransitionTime":"2025-10-14T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.340265 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.340301 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.340309 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.340322 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.340331 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:13Z","lastTransitionTime":"2025-10-14T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.442995 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.443309 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.443448 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.443538 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.443623 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:13Z","lastTransitionTime":"2025-10-14T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.545994 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.546671 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.546749 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.546858 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.546955 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:13Z","lastTransitionTime":"2025-10-14T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.649560 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.649608 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.649617 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.649633 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.649641 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:13Z","lastTransitionTime":"2025-10-14T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.752085 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.752345 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.752468 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.752566 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.752664 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:13Z","lastTransitionTime":"2025-10-14T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.854945 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.854985 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.854993 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.855009 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.855019 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:13Z","lastTransitionTime":"2025-10-14T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.958188 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.958229 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.958259 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.958276 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:13 crc kubenswrapper[4860]: I1014 14:50:13.958286 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:13Z","lastTransitionTime":"2025-10-14T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.060401 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.060467 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.060479 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.060497 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.060510 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:14Z","lastTransitionTime":"2025-10-14T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.162576 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.162833 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.162901 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.163002 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.163098 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:14Z","lastTransitionTime":"2025-10-14T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.265186 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.265239 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.265249 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.265263 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.265274 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:14Z","lastTransitionTime":"2025-10-14T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.367778 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.367819 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.367830 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.367845 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.367858 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:14Z","lastTransitionTime":"2025-10-14T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.470497 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.470843 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.470932 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.471008 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.471114 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:14Z","lastTransitionTime":"2025-10-14T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.573444 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.573490 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.573501 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.573517 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.573529 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:14Z","lastTransitionTime":"2025-10-14T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.675950 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.675977 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.675985 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.675998 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.676006 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:14Z","lastTransitionTime":"2025-10-14T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.778134 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.778176 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.778187 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.778202 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.778211 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:14Z","lastTransitionTime":"2025-10-14T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.880141 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.880170 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.880179 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.880191 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.880201 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:14Z","lastTransitionTime":"2025-10-14T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.981797 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.981848 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.981859 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.981874 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:14 crc kubenswrapper[4860]: I1014 14:50:14.981887 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:14Z","lastTransitionTime":"2025-10-14T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.060794 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.060858 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.060914 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:15 crc kubenswrapper[4860]: E1014 14:50:15.060917 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.060863 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:15 crc kubenswrapper[4860]: E1014 14:50:15.061069 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:15 crc kubenswrapper[4860]: E1014 14:50:15.061130 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:15 crc kubenswrapper[4860]: E1014 14:50:15.061189 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.083677 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.083718 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.083726 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.083740 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.083749 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:15Z","lastTransitionTime":"2025-10-14T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.186154 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.186187 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.186196 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.186208 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.186217 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:15Z","lastTransitionTime":"2025-10-14T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.288395 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.288437 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.288450 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.288465 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.288473 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:15Z","lastTransitionTime":"2025-10-14T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.390675 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.390702 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.390711 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.390726 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.390735 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:15Z","lastTransitionTime":"2025-10-14T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.493126 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.493181 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.493194 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.493211 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.493227 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:15Z","lastTransitionTime":"2025-10-14T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.595535 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.595571 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.595578 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.595592 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.595603 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:15Z","lastTransitionTime":"2025-10-14T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.697577 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.697618 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.697629 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.697646 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.697658 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:15Z","lastTransitionTime":"2025-10-14T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.799885 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.799928 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.799939 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.799952 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.799962 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:15Z","lastTransitionTime":"2025-10-14T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.902824 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.902869 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.902882 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.902902 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:15 crc kubenswrapper[4860]: I1014 14:50:15.902915 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:15Z","lastTransitionTime":"2025-10-14T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.005188 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.005234 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.005244 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.005262 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.005274 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:16Z","lastTransitionTime":"2025-10-14T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.107649 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.107680 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.107689 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.107703 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.107713 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:16Z","lastTransitionTime":"2025-10-14T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.190843 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.190875 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.190883 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.190896 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.190905 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:16Z","lastTransitionTime":"2025-10-14T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:16 crc kubenswrapper[4860]: E1014 14:50:16.202320 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:16Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.206151 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.206188 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.206198 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.206215 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.206227 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:16Z","lastTransitionTime":"2025-10-14T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:16 crc kubenswrapper[4860]: E1014 14:50:16.218872 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:16Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.223280 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.223319 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.223334 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.223355 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.223368 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:16Z","lastTransitionTime":"2025-10-14T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:16 crc kubenswrapper[4860]: E1014 14:50:16.236764 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:16Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.239946 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.239984 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
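[Editor's note: every rejected status patch in this burst fails for the same reason recorded above: the network-node-identity webhook at https://127.0.0.1:9743 presents a certificate whose notAfter (2025-08-24T17:21:41Z) is already behind the node's clock (2025-10-14T14:50:16Z), so TLS verification fails before the PATCH is ever delivered. The sketch below is a minimal Python rendering of the validity-window comparison the error text describes, using the two timestamps taken from the log; the notBefore value is invented for illustration, and this is not the actual Go crypto/x509 code path:

    from datetime import datetime, timezone

    # Both timestamps copied from the kubelet error above.
    now = datetime(2025, 10, 14, 14, 50, 16, tzinfo=timezone.utc)       # "current time"
    not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)  # certificate expiry

    # Hypothetical notBefore; the log does not show this value.
    not_before = datetime(2025, 8, 14, 17, 21, 41, tzinfo=timezone.utc)

    def time_valid(t, nb, na):
        # x509-style window check: a cert is rejected when expired or not yet valid.
        return nb <= t <= na

    print(time_valid(now, not_before, not_after))  # False -> handshake rejected

Because the apiserver cannot complete the webhook call, it returns "Internal error occurred" to the kubelet, which keeps retrying below.]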
event="NodeHasNoDiskPressure" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.239996 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.240010 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.240021 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:16Z","lastTransitionTime":"2025-10-14T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:16 crc kubenswrapper[4860]: E1014 14:50:16.251568 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:16Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.254901 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.254933 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.254943 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.254959 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.254970 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:16Z","lastTransitionTime":"2025-10-14T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:16 crc kubenswrapper[4860]: E1014 14:50:16.266391 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:16Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:16 crc kubenswrapper[4860]: E1014 14:50:16.266512 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.267945 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
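[Editor's note: the sequence above is the kubelet's bounded retry loop: several "Error updating node status, will retry" entries (the attempt whose payload opens this excerpt, then 14:50:16.236764, .251568 and .266391), followed by "Unable to update node status" err="update node status exceeds retry count" once the per-sync budget is spent; upstream kubelet's nodeStatusUpdateRetry constant is 5. A rough Python sketch of that pattern, not the kubelet's actual implementation:

    class WebhookRejected(Exception):
        """Stand-in for the x509 failure the admission webhook returns above."""

    def patch_node_status():
        # Every attempt fails the same way while the webhook cert stays expired.
        raise WebhookRejected("x509: certificate has expired or is not yet valid")

    NODE_STATUS_UPDATE_RETRY = 5  # mirrors the upstream kubelet constant

    def update_node_status():
        for _ in range(NODE_STATUS_UPDATE_RETRY):
            try:
                patch_node_status()
                return  # success: stop retrying
            except WebhookRejected as err:
                print(f'"Error updating node status, will retry" err="{err}"')
        print('"Unable to update node status" err="update node status exceeds retry count"')

    update_node_status()

The loop restarts on the next node-status sync interval, which is why near-identical bursts recur through the rest of the log.]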
event="NodeHasSufficientMemory" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.267983 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.267992 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.268006 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.268015 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:16Z","lastTransitionTime":"2025-10-14T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.370152 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.370194 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.370206 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.370221 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.370234 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:16Z","lastTransitionTime":"2025-10-14T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.472523 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.472589 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.472601 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.472619 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.472631 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:16Z","lastTransitionTime":"2025-10-14T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.574945 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.575087 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.575104 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.575120 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.575131 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:16Z","lastTransitionTime":"2025-10-14T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.677577 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.677645 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.677655 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.677671 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.677684 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:16Z","lastTransitionTime":"2025-10-14T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.779314 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.779351 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.779362 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.779378 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.779390 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:16Z","lastTransitionTime":"2025-10-14T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.881999 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.882061 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.882072 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.882089 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.882101 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:16Z","lastTransitionTime":"2025-10-14T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.984597 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.984639 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.984649 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.984663 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:16 crc kubenswrapper[4860]: I1014 14:50:16.984674 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:16Z","lastTransitionTime":"2025-10-14T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.060569 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.060569 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:17 crc kubenswrapper[4860]: E1014 14:50:17.060705 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.060595 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.060850 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:17 crc kubenswrapper[4860]: E1014 14:50:17.060884 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:17 crc kubenswrapper[4860]: E1014 14:50:17.061007 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:17 crc kubenswrapper[4860]: E1014 14:50:17.061088 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.071178 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.087181 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.087218 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.087226 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.087238 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.087248 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:17Z","lastTransitionTime":"2025-10-14T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.190402 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.190434 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.190442 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.190454 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.190462 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:17Z","lastTransitionTime":"2025-10-14T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.292975 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.293016 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.293038 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.293053 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.293062 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:17Z","lastTransitionTime":"2025-10-14T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.395346 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.395377 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.395386 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.395400 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.395413 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:17Z","lastTransitionTime":"2025-10-14T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.497962 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.498006 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.498017 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.498063 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.498076 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:17Z","lastTransitionTime":"2025-10-14T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.602810 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.602859 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.602873 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.602890 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.602908 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:17Z","lastTransitionTime":"2025-10-14T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.705246 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.705277 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.705285 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.705298 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.705307 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:17Z","lastTransitionTime":"2025-10-14T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.807969 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.807993 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.808000 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.808012 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.808020 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:17Z","lastTransitionTime":"2025-10-14T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.910187 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.910220 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.910227 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.910241 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:17 crc kubenswrapper[4860]: I1014 14:50:17.910249 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:17Z","lastTransitionTime":"2025-10-14T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.012923 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.012961 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.012972 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.013012 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.013057 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:18Z","lastTransitionTime":"2025-10-14T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.115903 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.115988 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.116007 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.116064 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.116080 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:18Z","lastTransitionTime":"2025-10-14T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.219330 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.219485 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.219505 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.219532 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.219549 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:18Z","lastTransitionTime":"2025-10-14T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.323924 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.324167 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.324197 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.324219 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.324236 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:18Z","lastTransitionTime":"2025-10-14T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.426965 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.426995 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.427003 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.427015 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.427023 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:18Z","lastTransitionTime":"2025-10-14T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.529745 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.529800 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.529813 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.529830 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.529841 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:18Z","lastTransitionTime":"2025-10-14T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.633016 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.633105 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.633123 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.633148 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.633166 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:18Z","lastTransitionTime":"2025-10-14T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.735902 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.735943 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.735956 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.735974 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.735988 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:18Z","lastTransitionTime":"2025-10-14T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.838187 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.838224 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.838239 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.838253 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.838262 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:18Z","lastTransitionTime":"2025-10-14T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.940071 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.940108 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.940115 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.940129 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:18 crc kubenswrapper[4860]: I1014 14:50:18.940138 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:18Z","lastTransitionTime":"2025-10-14T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.042102 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.042133 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.042141 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.042155 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.042164 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:19Z","lastTransitionTime":"2025-10-14T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.061073 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.061126 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.061088 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:19 crc kubenswrapper[4860]: E1014 14:50:19.061238 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:19 crc kubenswrapper[4860]: E1014 14:50:19.061359 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.061416 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:19 crc kubenswrapper[4860]: E1014 14:50:19.061469 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:19 crc kubenswrapper[4860]: E1014 14:50:19.061511 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.078304 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e445
78cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.090172 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.104561 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.116512 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.127797 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a71e62ff-6efd-4d0e-80b0-c988796836a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3453fcf3b3874b2b59af674d5bc2c6d806b1431e65aefbed34bf5dbc26a945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba1e959f7ea47716c4a292675af40550a65b87c5ce2c6e2bc9d7579997382a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b65bb07a7c9a756a34b9f485c8521029672018515e93eef3f557db38a56c428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.140891 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.144629 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.144659 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.144667 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.144683 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.144693 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:19Z","lastTransitionTime":"2025-10-14T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.155221 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.170378 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 
14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.187635 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.199662 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.210966 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.223486 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd2467d8c6acdf7e08b9eab1c254d5a14134e125433a9b40b8eb6dc66cbe4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:50:10Z\\\",\\\"message\\\":\\\"2025-10-14T14:49:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_962995be-e23e-4074-931a-53be49815580\\\\n2025-10-14T14:49:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_962995be-e23e-4074-931a-53be49815580 to /host/opt/cni/bin/\\\\n2025-10-14T14:49:25Z [verbose] multus-daemon started\\\\n2025-10-14T14:49:25Z [verbose] Readiness Indicator file check\\\\n2025-10-14T14:50:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:50:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.239123 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.246866 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.246897 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:19 crc 
kubenswrapper[4860]: I1014 14:50:19.246905 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.246919 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.246928 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:19Z","lastTransitionTime":"2025-10-14T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.249020 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.261058 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0840e458-6b35-4dcd-a8ca-57479a256d75\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14cf97a4526994bafc923e20f51157fe84ec6690b3bba1f2210a43105a2ce6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420bb77c1e6cdfa7d07b36a04764f9404a1ada3d66e58fa5444fc93d8981bd11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420bb77c1e6cdfa7d07b36a04764f9404a1ada3d66e58fa5444fc93d8981bd11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.273387 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.284242 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.294800 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.313101 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:53Z\\\",\\\"message\\\":\\\"4:49:53.900879 6450 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 14:49:53.900894 6450 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 14:49:53.900899 6450 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 14:49:53.901074 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 14:49:53.901101 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 14:49:53.901106 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1014 14:49:53.901116 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 14:49:53.901120 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 14:49:53.901160 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 14:49:53.901169 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 14:49:53.901177 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 14:49:53.901182 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 14:49:53.901201 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 14:49:53.901222 6450 factory.go:656] Stopping watch factory\\\\nI1014 14:49:53.901251 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 14:49:53.901265 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1014 14:49:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:19Z is after 2025-08-24T17:21:41Z"
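The status-patch failure above is not a kubelet bug: the PATCH of the pod status is rejected because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, well before the current time 2025-10-14T14:50:19Z shown in the error. A minimal sketch for confirming the expiry from the node itself follows; the endpoint, port, and timestamps are taken verbatim from the log, while the third-party cryptography package (>= 42 for the *_utc accessors) is an assumed dependency.

# check_webhook_cert.py -- minimal sketch; run on the node itself, since the
# webhook only listens on loopback. Assumes python3 plus the "cryptography"
# package is available there.
import datetime
import socket
import ssl

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # endpoint taken from the kubelet errors above

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # fetch the cert even though it is expired

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # DER bytes of the leaf cert

cert = x509.load_der_x509_certificate(der)
now = datetime.datetime.now(datetime.timezone.utc)
print("subject: ", cert.subject.rfc4514_string())
print("notAfter:", cert.not_valid_after_utc)
print("expired: ", cert.not_valid_after_utc < now)  # expect True per the log

Something like openssl s_client -connect 127.0.0.1:9743 would show the same validity dates; the Python form just makes the expiry check scriptable.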
Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.348867 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.348925 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.348933 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.348947 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.348956 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:19Z","lastTransitionTime":"2025-10-14T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.450636 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.450895 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.450981 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.451087 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.451167 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:19Z","lastTransitionTime":"2025-10-14T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.553993 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.554022 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.554055 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.554067 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.554075 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:19Z","lastTransitionTime":"2025-10-14T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.655897 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.656188 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.656262 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.656335 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.656412 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:19Z","lastTransitionTime":"2025-10-14T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
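The NodeNotReady run above (and below) repeats the same five events several times per second; the operative detail never changes: NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/, and the component that would normally write one is presumably the ovnkube-controller container crash-looping earlier in this log. A minimal sketch of the check the message implies follows; the directory is taken verbatim from the condition, and the file names are whatever the network plugin eventually drops there.

# cni_check.py -- minimal sketch: list the CNI configs the kubelet is waiting
# for. The directory comes verbatim from the NodeNotReady condition above.
import json
import os

CNI_DIR = "/etc/kubernetes/cni/net.d"

try:
    entries = sorted(os.listdir(CNI_DIR))
except FileNotFoundError:
    entries = []

if not entries:
    print(f"{CNI_DIR} is empty -- kubelet stays NotReady until a plugin writes a config")

for name in entries:
    # CNI loaders generally look for .conf/.conflist (sometimes .json) files
    if not name.endswith((".conf", ".conflist")):
        continue
    with open(os.path.join(CNI_DIR, name)) as f:
        conf = json.load(f)
    print(name, "->", conf.get("name", "<unnamed>"))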
Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.758971 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.759178 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.759204 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.759222 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.759234 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:19Z","lastTransitionTime":"2025-10-14T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.861684 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.861724 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.861735 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.861751 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.861762 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:19Z","lastTransitionTime":"2025-10-14T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.964233 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.964272 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.964281 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.964296 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:19 crc kubenswrapper[4860]: I1014 14:50:19.964307 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:19Z","lastTransitionTime":"2025-10-14T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.066931 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.066961 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.066969 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.066980 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.066988 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:20Z","lastTransitionTime":"2025-10-14T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.168669 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.168708 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.168716 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.168728 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.168738 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:20Z","lastTransitionTime":"2025-10-14T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.270690 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.270724 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.270733 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.270765 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.270776 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:20Z","lastTransitionTime":"2025-10-14T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.373500 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.373814 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.373822 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.373835 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.373845 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:20Z","lastTransitionTime":"2025-10-14T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.476454 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.476513 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.476522 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.476535 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.476544 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:20Z","lastTransitionTime":"2025-10-14T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.579244 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.579311 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.579323 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.579339 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.579352 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:20Z","lastTransitionTime":"2025-10-14T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.681663 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.681714 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.681725 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.681763 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.681776 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:20Z","lastTransitionTime":"2025-10-14T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.784431 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.784482 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.784494 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.784511 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.784522 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:20Z","lastTransitionTime":"2025-10-14T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.886589 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.886628 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.886636 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.886648 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.886660 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:20Z","lastTransitionTime":"2025-10-14T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.989047 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.989337 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.989423 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.989519 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:20 crc kubenswrapper[4860]: I1014 14:50:20.989594 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:20Z","lastTransitionTime":"2025-10-14T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.061320 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:21 crc kubenswrapper[4860]: E1014 14:50:21.061750 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.061951 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:21 crc kubenswrapper[4860]: E1014 14:50:21.062119 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.062495 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:21 crc kubenswrapper[4860]: E1014 14:50:21.062640 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.067209 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:21 crc kubenswrapper[4860]: E1014 14:50:21.068349 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.091477 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.091758 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.091853 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.091946 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.092039 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:21Z","lastTransitionTime":"2025-10-14T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.194733 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.194766 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.194774 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.194787 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.194796 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:21Z","lastTransitionTime":"2025-10-14T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.297396 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.297715 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.297851 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.297961 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.298099 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:21Z","lastTransitionTime":"2025-10-14T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.400060 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.400300 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.400406 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.400496 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.400584 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:21Z","lastTransitionTime":"2025-10-14T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.502983 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.503258 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.503337 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.503415 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.503486 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:21Z","lastTransitionTime":"2025-10-14T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.605788 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.606115 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.606222 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.606380 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.606466 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:21Z","lastTransitionTime":"2025-10-14T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.709009 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.709729 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.709811 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.709872 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.709928 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:21Z","lastTransitionTime":"2025-10-14T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.812597 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.812623 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.812633 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.812646 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.812655 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:21Z","lastTransitionTime":"2025-10-14T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.914904 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.914966 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.914978 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.914991 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:21 crc kubenswrapper[4860]: I1014 14:50:21.915001 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:21Z","lastTransitionTime":"2025-10-14T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.017107 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.017148 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.017159 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.017179 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.017192 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:22Z","lastTransitionTime":"2025-10-14T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.061562 4860 scope.go:117] "RemoveContainer" containerID="45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.118972 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.119012 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.119044 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.119059 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.119094 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:22Z","lastTransitionTime":"2025-10-14T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.220800 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.220835 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.220846 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.220861 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.220873 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:22Z","lastTransitionTime":"2025-10-14T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.323104 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.323147 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.323158 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.323173 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.323183 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:22Z","lastTransitionTime":"2025-10-14T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.425099 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.425138 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.425149 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.425164 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.425173 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:22Z","lastTransitionTime":"2025-10-14T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.527488 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.527523 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.527530 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.527543 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.527551 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:22Z","lastTransitionTime":"2025-10-14T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.572557 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovnkube-controller/2.log" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.574340 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerStarted","Data":"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d"} Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.574624 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.586664 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0840e458-6b35-4dcd-a8ca-57479a256d75\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14cf97a4526994bafc923e20f51157fe84ec6690b3bba1f2210a43105a2ce6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420bb77c1e6cdfa7d07b36a04764f9404a1ada3d66e58fa5444fc93d8981bd11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420bb77c1e6cdfa7d07b36a04764f9404a1ada3d66e58fa5444fc93d8981bd11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.597533 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.607272 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.616872 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.629307 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.629340 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.629348 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.629362 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.629372 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:22Z","lastTransitionTime":"2025-10-14T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.633286 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:53Z\\\",\\\"message\\\":\\\"4:49:53.900879 6450 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 14:49:53.900894 6450 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 14:49:53.900899 6450 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 14:49:53.901074 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 14:49:53.901101 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 14:49:53.901106 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1014 14:49:53.901116 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 14:49:53.901120 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 14:49:53.901160 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 14:49:53.901169 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 14:49:53.901177 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 14:49:53.901182 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 14:49:53.901201 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 14:49:53.901222 6450 factory.go:656] Stopping watch factory\\\\nI1014 14:49:53.901251 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 14:49:53.901265 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1014 
14:49:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.650402 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255
d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.663682 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d6656
35a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.673795 4860 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.684312 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.693801 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a71e62ff-6efd-4d0e-80b0-c988796836a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3453fcf3b3874b2b59af674d5bc2c6d806b1431e65aefbed34bf5dbc26a945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba1e959f7ea47716c4a292675af40550a65b87c5ce2c6e2bc9d7579997382a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b65bb07a7c9a756a34b9f485c8521029672018515e93eef3f557db38a56c428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.708457 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.718771 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.728242 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.731556 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.731582 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.731592 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.731605 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.731612 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:22Z","lastTransitionTime":"2025-10-14T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.745001 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.763716 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.774692 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.784492 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd2467d8c6acdf7e08b9eab1c254d5a14134e125433a9b40b8eb6dc66cbe4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:50:10Z\\\",\\\"message\\\":\\\"2025-10-14T14:49:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_962995be-e23e-4074-931a-53be49815580\\\\n2025-10-14T14:49:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_962995be-e23e-4074-931a-53be49815580 to /host/opt/cni/bin/\\\\n2025-10-14T14:49:25Z [verbose] multus-daemon started\\\\n2025-10-14T14:49:25Z [verbose] Readiness Indicator file check\\\\n2025-10-14T14:50:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:50:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.797783 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.809440 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:22Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.834124 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.834164 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.834175 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.834189 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.834199 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:22Z","lastTransitionTime":"2025-10-14T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.936797 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.936840 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.936851 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.936869 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.936881 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:22Z","lastTransitionTime":"2025-10-14T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.939232 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.939316 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.939345 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.939367 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:22 crc kubenswrapper[4860]: I1014 14:50:22.939420 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:22 crc kubenswrapper[4860]: E1014 14:50:22.939523 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 14:50:22 crc kubenswrapper[4860]: E1014 14:50:22.939539 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 14:50:22 crc kubenswrapper[4860]: E1014 14:50:22.939549 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:50:22 crc kubenswrapper[4860]: E1014 14:50:22.939588 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.93957651 +0000 UTC m=+148.526359959 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:50:22 crc kubenswrapper[4860]: E1014 14:50:22.939706 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.939699683 +0000 UTC m=+148.526483132 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:50:22 crc kubenswrapper[4860]: E1014 14:50:22.939736 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 14:50:22 crc kubenswrapper[4860]: E1014 14:50:22.939755 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.939749694 +0000 UTC m=+148.526533143 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 14:50:22 crc kubenswrapper[4860]: E1014 14:50:22.939854 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 14:50:22 crc kubenswrapper[4860]: E1014 14:50:22.939875 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.939870027 +0000 UTC m=+148.526653476 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 14:50:22 crc kubenswrapper[4860]: E1014 14:50:22.939970 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 14:50:22 crc kubenswrapper[4860]: E1014 14:50:22.939982 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 14:50:22 crc kubenswrapper[4860]: E1014 14:50:22.939989 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:50:22 crc kubenswrapper[4860]: E1014 14:50:22.940010 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.94000459 +0000 UTC m=+148.526788039 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.040053 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.040097 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.040113 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.040134 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.040149 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:23Z","lastTransitionTime":"2025-10-14T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.061208 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.061220 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.061279 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:23 crc kubenswrapper[4860]: E1014 14:50:23.061426 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.061630 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:23 crc kubenswrapper[4860]: E1014 14:50:23.061697 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:23 crc kubenswrapper[4860]: E1014 14:50:23.061841 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:23 crc kubenswrapper[4860]: E1014 14:50:23.062003 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.142104 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.142143 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.142152 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.142166 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.142175 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:23Z","lastTransitionTime":"2025-10-14T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.244824 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.244865 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.244873 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.244887 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.244896 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:23Z","lastTransitionTime":"2025-10-14T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.347202 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.347241 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.347249 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.347265 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.347276 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:23Z","lastTransitionTime":"2025-10-14T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.449912 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.449977 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.449991 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.450010 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.450078 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:23Z","lastTransitionTime":"2025-10-14T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.552331 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.552363 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.552376 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.552392 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.552404 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:23Z","lastTransitionTime":"2025-10-14T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.578847 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovnkube-controller/3.log" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.579394 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovnkube-controller/2.log" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.581984 4860 generic.go:334] "Generic (PLEG): container finished" podID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerID="25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d" exitCode=1 Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.582021 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerDied","Data":"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d"} Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.582089 4860 scope.go:117] "RemoveContainer" containerID="45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.582750 4860 scope.go:117] "RemoveContainer" containerID="25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d" Oct 14 14:50:23 crc kubenswrapper[4860]: E1014 14:50:23.582977 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.596228 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0840e458-6b35-4dcd-a8ca-57479a256d75\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14cf97a4526994bafc923e20f51157fe84ec6690b3bba1f2210a43105a2ce6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420bb77c1e6cdfa7d07b36a04764f9404a1ada3d66e58fa5444fc93d8981bd11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420bb77c1e6cdfa7d07b36a04764f9404a1ada3d66e58fa5444fc93d8981bd11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.608108 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.617677 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.626046 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.641957 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45a70e1f83b6f607de8989d5bf9a85452f870694bc215cabae224aea52c9cc96\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:49:53Z\\\",\\\"message\\\":\\\"4:49:53.900879 6450 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 14:49:53.900894 6450 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 14:49:53.900899 6450 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 14:49:53.901074 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 14:49:53.901101 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 14:49:53.901106 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1014 14:49:53.901116 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 14:49:53.901120 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 14:49:53.901160 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 14:49:53.901169 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 14:49:53.901177 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 14:49:53.901182 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 14:49:53.901201 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 14:49:53.901222 6450 factory.go:656] Stopping watch factory\\\\nI1014 14:49:53.901251 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 14:49:53.901265 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1014 14:49:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:50:23Z\\\",\\\"message\\\":\\\"50:22.932667 6874 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1014 14:50:22.932675 6874 obj_retry.go:409] Going to retry *v1.Pod resource setup for 10 objects: [openshift-etcd/etcd-crc openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn openshift-kube-apiserver/kube-apiserver-crc openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-network-diagnostics/network-check-target-xd92c openshift-kube-controller-manager/kube-controller-manager-crc openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-machine-config-operator/kube-rbac-proxy-crio-crc openshift-multus/network-metrics-daemon-vtscw]\\\\nI1014 14:50:22.932688 6874 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1014 14:50:22.932702 6874 obj_retry.go:285] Attempting retry 
of *v1.Pod openshift-multus/network-metrics-daemon-vtscw before timer (time: 2025-10-14 14:50:23.968863246 +0000 UTC m=+1.572788099): skip\\\\nI1014 14:50:22.932717 6874 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 14:50:22.932757 6874 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.654594 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.654640 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.654649 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.654662 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.654672 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:23Z","lastTransitionTime":"2025-10-14T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.658951 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.670815 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.681625 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.691096 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.702539 4860 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a71e62ff-6efd-4d0e-80b0-c988796836a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3453fcf3b3874b2b59af674d5bc2c6d806b1431e65aefbed34bf5dbc26a945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba1e959f7ea47716c4a292675af40550a65b87c5ce2c6e2bc9d7579997382a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b65bb07a7c9a756a34b9f485c8521029672018515e93eef3f557db38a56c428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.714328 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.723212 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.732966 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.744306 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.754987 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z"
Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.756389 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.756421 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.756431 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.756445 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.756454 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:23Z","lastTransitionTime":"2025-10-14T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.767462 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.778509 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd2467d8c6acdf7e08b9eab1c254d5a14134e125433a9b40b8eb6dc66cbe4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:50:10Z\\\",\\\"message\\\":\\\"2025-10-14T14:49:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_962995be-e23e-4074-931a-53be49815580\\\\n2025-10-14T14:49:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_962995be-e23e-4074-931a-53be49815580 to /host/opt/cni/bin/\\\\n2025-10-14T14:49:25Z [verbose] multus-daemon started\\\\n2025-10-14T14:49:25Z [verbose] Readiness Indicator file check\\\\n2025-10-14T14:50:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:50:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.793067 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.803940 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:23Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.858626 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.858661 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.858671 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.858685 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.858696 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:23Z","lastTransitionTime":"2025-10-14T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.960920 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.960961 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.960969 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.960985 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:23 crc kubenswrapper[4860]: I1014 14:50:23.960994 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:23Z","lastTransitionTime":"2025-10-14T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.063081 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.063129 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.063140 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.063155 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.063167 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:24Z","lastTransitionTime":"2025-10-14T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.165432 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.165478 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.165489 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.165505 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.165517 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:24Z","lastTransitionTime":"2025-10-14T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.268121 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.268183 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.268200 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.268221 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.268236 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:24Z","lastTransitionTime":"2025-10-14T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.370321 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.370351 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.370361 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.370376 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.370385 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:24Z","lastTransitionTime":"2025-10-14T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.472157 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.472190 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.472200 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.472213 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.472223 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:24Z","lastTransitionTime":"2025-10-14T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.574706 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.574752 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.574760 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.574775 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.574784 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:24Z","lastTransitionTime":"2025-10-14T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.586423 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovnkube-controller/3.log" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.590184 4860 scope.go:117] "RemoveContainer" containerID="25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d" Oct 14 14:50:24 crc kubenswrapper[4860]: E1014 14:50:24.590327 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.601882 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.612322 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.624617 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.636629 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd2467d8c6acdf7e08b9eab1c254d5a14134e125433a9b40b8eb6dc66cbe4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:50:10Z\\\",\\\"message\\\":\\\"2025-10-14T14:49:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_962995be-e23e-4074-931a-53be49815580\\\\n2025-10-14T14:49:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_962995be-e23e-4074-931a-53be49815580 to /host/opt/cni/bin/\\\\n2025-10-14T14:49:25Z [verbose] multus-daemon started\\\\n2025-10-14T14:49:25Z [verbose] Readiness Indicator file check\\\\n2025-10-14T14:50:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:50:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.649591 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.659849 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.668349 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0840e458-6b35-4dcd-a8ca-57479a256d75\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14cf97a4526994bafc923e20f51157fe84ec6690b3bba1f2210a43105a2ce6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420bb77c1e6cdfa7d07b36a04764f9404a1ada3d66e58fa5444fc93d8981bd11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420bb77c1e6cdfa7d07b36a04764f9404a1ada3d66e58fa5444fc93d8981bd11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.676714 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.676753 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.676762 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.676777 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.676785 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:24Z","lastTransitionTime":"2025-10-14T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.680673 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.690470 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.702760 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.719546 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:50:23Z\\\",\\\"message\\\":\\\"50:22.932667 6874 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1014 14:50:22.932675 6874 obj_retry.go:409] Going to retry *v1.Pod resource setup for 10 objects: [openshift-etcd/etcd-crc openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn openshift-kube-apiserver/kube-apiserver-crc openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-network-diagnostics/network-check-target-xd92c openshift-kube-controller-manager/kube-controller-manager-crc openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-machine-config-operator/kube-rbac-proxy-crio-crc openshift-multus/network-metrics-daemon-vtscw]\\\\nI1014 14:50:22.932688 6874 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1014 14:50:22.932702 6874 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-vtscw before timer (time: 2025-10-14 14:50:23.968863246 +0000 UTC m=+1.572788099): skip\\\\nI1014 14:50:22.932717 6874 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 14:50:22.932757 6874 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:50:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.738772 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a
8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.750862 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.761505 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.772291 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.778685 4860 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.778712 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.778721 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.778734 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.778745 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:24Z","lastTransitionTime":"2025-10-14T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.783424 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a71e62ff-6efd-4d0e-80b0-c988796836a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3453fcf3b3874b2b59af674d5bc2c6d806b1431e65aefbed34bf5dbc26a945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba1e959f7ea47716c4a292675af40550a65b87c5ce2c6e2bc9d7579997382a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b65bb07a7c9a756a34b9f485c8521029672018515e93eef3f557db38a56c428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.795476 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.804991 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.813931 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:24Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.880672 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.880706 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.880715 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.880730 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.880740 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:24Z","lastTransitionTime":"2025-10-14T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.982907 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.982975 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.982989 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.983002 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:24 crc kubenswrapper[4860]: I1014 14:50:24.983012 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:24Z","lastTransitionTime":"2025-10-14T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.060746 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:25 crc kubenswrapper[4860]: E1014 14:50:25.060877 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.061099 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:25 crc kubenswrapper[4860]: E1014 14:50:25.061193 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.061102 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.061225 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:25 crc kubenswrapper[4860]: E1014 14:50:25.061294 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:25 crc kubenswrapper[4860]: E1014 14:50:25.061429 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.085688 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.085725 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.085741 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.085762 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.085777 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:25Z","lastTransitionTime":"2025-10-14T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.188049 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.188086 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.188097 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.188116 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.188129 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:25Z","lastTransitionTime":"2025-10-14T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.290241 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.290268 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.290275 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.290290 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.290298 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:25Z","lastTransitionTime":"2025-10-14T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.392342 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.392375 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.392390 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.392406 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.392416 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:25Z","lastTransitionTime":"2025-10-14T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.494627 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.494663 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.494672 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.494685 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.494695 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:25Z","lastTransitionTime":"2025-10-14T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.596708 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.596742 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.596750 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.596763 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.596772 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:25Z","lastTransitionTime":"2025-10-14T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.699497 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.699533 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.699543 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.699558 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.699570 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:25Z","lastTransitionTime":"2025-10-14T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.801292 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.801348 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.801365 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.801388 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.801404 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:25Z","lastTransitionTime":"2025-10-14T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.904477 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.904535 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.904557 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.904585 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:25 crc kubenswrapper[4860]: I1014 14:50:25.904656 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:25Z","lastTransitionTime":"2025-10-14T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.007690 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.007753 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.007771 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.007798 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.007815 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:26Z","lastTransitionTime":"2025-10-14T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.110955 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.111006 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.111018 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.111063 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.111073 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:26Z","lastTransitionTime":"2025-10-14T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.213469 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.213520 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.213592 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.213609 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.213675 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:26Z","lastTransitionTime":"2025-10-14T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.316602 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.316646 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.316662 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.316681 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.316691 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:26Z","lastTransitionTime":"2025-10-14T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.419477 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.419513 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.419525 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.419537 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.419546 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:26Z","lastTransitionTime":"2025-10-14T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.500944 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.500982 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.500991 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.501008 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.501020 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:26Z","lastTransitionTime":"2025-10-14T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:26 crc kubenswrapper[4860]: E1014 14:50:26.513257 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:26Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.516838 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.516867 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.516875 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.516891 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.516901 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:26Z","lastTransitionTime":"2025-10-14T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:26 crc kubenswrapper[4860]: E1014 14:50:26.527821 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:26Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.531147 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.531171 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.531180 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.531197 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.531207 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:26Z","lastTransitionTime":"2025-10-14T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:26 crc kubenswrapper[4860]: E1014 14:50:26.543850 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:26Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.547952 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.547989 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.547998 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.548012 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.548021 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:26Z","lastTransitionTime":"2025-10-14T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:26 crc kubenswrapper[4860]: E1014 14:50:26.563072 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:26Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.566232 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.566269 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.566277 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.566292 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.566302 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:26Z","lastTransitionTime":"2025-10-14T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:26 crc kubenswrapper[4860]: E1014 14:50:26.578160 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6ed96bb-defa-436f-8418-5c94eee7820a\\\",\\\"systemUUID\\\":\\\"f3673689-c436-4678-b4d3-79881aec5944\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:26Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:26 crc kubenswrapper[4860]: E1014 14:50:26.578292 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.579583 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.579607 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.579616 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.579628 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.579637 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:26Z","lastTransitionTime":"2025-10-14T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.681423 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.681465 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.681476 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.681493 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.681919 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:26Z","lastTransitionTime":"2025-10-14T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.783986 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.784057 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.784071 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.784090 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.784102 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:26Z","lastTransitionTime":"2025-10-14T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.886348 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.886385 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.886398 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.886412 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.886421 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:26Z","lastTransitionTime":"2025-10-14T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.988886 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.988948 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.988966 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.988991 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:26 crc kubenswrapper[4860]: I1014 14:50:26.989008 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:26Z","lastTransitionTime":"2025-10-14T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.061319 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.061352 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.061361 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.061380 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:27 crc kubenswrapper[4860]: E1014 14:50:27.061461 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 14 14:50:27 crc kubenswrapper[4860]: E1014 14:50:27.061553 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 14:50:27 crc kubenswrapper[4860]: E1014 14:50:27.061612 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 14:50:27 crc kubenswrapper[4860]: E1014 14:50:27.061659 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.091125 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.091158 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.091169 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.091184 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.091195 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:27Z","lastTransitionTime":"2025-10-14T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
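
Every NotReady heartbeat and every "Error syncing pod" above traces back to the same gate: the runtime reports NetworkReady=false until a CNI config appears in /etc/kubernetes/cni/net.d/. Roughly what that readiness check amounts to, as a sketch (directory taken from the error text above; the three extensions are the ones CNI config loaders conventionally accept; adjust the path if the runtime is configured with a different CNI config directory):

    from pathlib import Path

    # Directory named in the kubelet errors above.
    conf_dir = Path("/etc/kubernetes/cni/net.d")

    # CNI config discovery conventionally considers *.conf, *.conflist, *.json.
    configs = (sorted(p for p in conf_dir.iterdir()
                      if p.suffix in {".conf", ".conflist", ".json"})
               if conf_dir.is_dir() else [])

    if configs:
        print("CNI configs present:", ", ".join(p.name for p in configs))
    else:
        print(f"no CNI configuration file in {conf_dir}/ - network plugin not ready")

On this node the directory is empty because, per the multus log quoted later in this journal, multus is itself still waiting for the ovn-kubernetes config to be written.
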
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.193560 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.193620 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.193649 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.193663 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.193675 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:27Z","lastTransitionTime":"2025-10-14T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.295554 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.295617 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.295635 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.295662 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.295682 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:27Z","lastTransitionTime":"2025-10-14T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.397486 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.397525 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.397534 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.397547 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.397556 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:27Z","lastTransitionTime":"2025-10-14T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.500731 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.500773 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.500783 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.500799 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.500808 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:27Z","lastTransitionTime":"2025-10-14T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.603809 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.603853 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.603867 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.603882 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.603891 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:27Z","lastTransitionTime":"2025-10-14T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.706370 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.706708 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.706787 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.706805 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.706818 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:27Z","lastTransitionTime":"2025-10-14T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.808725 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.808788 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.808805 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.808828 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.808844 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:27Z","lastTransitionTime":"2025-10-14T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.910600 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.910642 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.910653 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.910671 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:27 crc kubenswrapper[4860]: I1014 14:50:27.910680 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:27Z","lastTransitionTime":"2025-10-14T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.013271 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.013308 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.013317 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.013330 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.013340 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:28Z","lastTransitionTime":"2025-10-14T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
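
The same five-entry block (four "Recording event message for node" lines plus one setters.go:603 condition write) repeats roughly every 100 ms for as long as the node stays NotReady. When reading a capture of this journal, collapsing the repetition makes the few unique messages stand out; a sketch, assuming the journal has been saved to a hypothetical file kubelet.log:

    import re
    from collections import Counter

    counts = Counter()
    with open("kubelet.log", encoding="utf-8") as f:
        for line in f:
            # klog entries quote their structured message right after the
            # source location, e.g.  setters.go:603] "Node became not ready"
            m = re.search(r'\w+\.go:\d+\] "([^"]+)"', line)
            if m:
                counts[m.group(1)] += 1

    # Print the ten most frequent messages with their counts.
    for msg, n in counts.most_common(10):
        print(f"{n:6d}  {msg}")
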
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.116125 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.116184 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.116201 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.116223 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.116239 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:28Z","lastTransitionTime":"2025-10-14T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.218574 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.218633 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.218650 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.218673 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.218691 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:28Z","lastTransitionTime":"2025-10-14T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.320906 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.320937 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.320947 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.320961 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.320970 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:28Z","lastTransitionTime":"2025-10-14T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.423457 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.423532 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.423552 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.423578 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.423622 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:28Z","lastTransitionTime":"2025-10-14T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.526141 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.526212 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.526235 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.526263 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.526283 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:28Z","lastTransitionTime":"2025-10-14T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.628792 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.628864 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.628889 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.628916 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.628936 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:28Z","lastTransitionTime":"2025-10-14T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.732289 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.732331 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.732343 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.732359 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.732369 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:28Z","lastTransitionTime":"2025-10-14T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.834762 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.834798 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.834808 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.834822 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.834830 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:28Z","lastTransitionTime":"2025-10-14T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.937941 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.938076 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.938098 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.938121 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:28 crc kubenswrapper[4860]: I1014 14:50:28.938138 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:28Z","lastTransitionTime":"2025-10-14T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
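
The status_manager.go:875 failures further down give the underlying fault: every pod-status patch dies calling the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because the certificate it presents expired 2025-08-24T17:21:41Z, while the node clock reads 2025-10-14. A quick host-side check of that listener's validity window, as a sketch (assumes the third-party cryptography package is installed; ssl.get_server_certificate fetches the peer's PEM without validating it, which is exactly what we want for an expired cert):

    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # third-party: pip install cryptography

    # Endpoint taken from the webhook failure messages below.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))

    # not_valid_after is naive UTC (not_valid_after_utc on newer releases).
    not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)
    now = datetime.now(timezone.utc)
    print("notAfter:", not_after, "expired:", now > not_after)
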
Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.041193 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.041238 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.041250 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.041268 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.041281 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:29Z","lastTransitionTime":"2025-10-14T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.061378 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 14:50:29 crc kubenswrapper[4860]: E1014 14:50:29.061513 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.061732 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 14:50:29 crc kubenswrapper[4860]: E1014 14:50:29.061819 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.061965 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw"
Oct 14 14:50:29 crc kubenswrapper[4860]: E1014 14:50:29.062070 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816"
Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.062215 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:29 crc kubenswrapper[4860]: E1014 14:50:29.062274 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.080737 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a71e62ff-6efd-4d0e-80b0-c988796836a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3453fcf3b3874b2b59af674d5bc2c6d806b1431e65aefbed34bf5dbc26a945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba1e959f7ea47716c4a292675af40550a65b87c5ce2c6e2bc9d7579997382a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b65bb07a7c9a756a34b9f485c8521029672018515e93eef3f557db38a56c428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bffebb1e5a6a6a52a871a90ac2febfa135ce1b9d8272c68fe5babe902b72520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.097179 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.115748 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2thzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05162975-38db-40bf-9eb5-4d9bc165cb83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92410fb9158258fae94afb1d7b35d903c0bd96a71ae272f72aca668ecbb70242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcd9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2thzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.135411 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd2cd739-fe15-4cc1-881e-a20faa721bb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0208779a6dae980c79b5f33bd8cf41989dbef977b1d94712fea636f0572e472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4013dbc905d4ff6e6f9d50f289834e4e588f223c80b21319bd42509787b103c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kq9j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:31Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kxsqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.143139 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.143170 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.143179 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.143217 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.143228 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:29Z","lastTransitionTime":"2025-10-14T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.150586 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f906a2-953a-48fd-8921-0ddd6a2fa5f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://704ddfae3d21040c06370a90e573eb9b5988a0d044f7c396af76c463469c0fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04b514ddfd991946a302788f0c51931a22fd56e93ec9fe8764cd4cc119d507f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ccf24d94f3411b473af54401b2c7ad9d1c29bc2465007846f2b18d5638e09e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddc07a162434b3da0304c874b04eaba5c18d8ba033804684ab507cae89802d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.171608 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.187423 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.203610 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dcr2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceb09eae-57c9-4a8e-95d5-aa40e49f7316\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd2467d8c6acdf7e08b9eab1c254d5a14134e125433a9b40b8eb6dc66cbe4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:50:10Z\\\",\\\"message\\\":\\\"2025-10-14T14:49:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_962995be-e23e-4074-931a-53be49815580\\\\n2025-10-14T14:49:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_962995be-e23e-4074-931a-53be49815580 to /host/opt/cni/bin/\\\\n2025-10-14T14:49:25Z [verbose] multus-daemon started\\\\n2025-10-14T14:49:25Z [verbose] Readiness Indicator file check\\\\n2025-10-14T14:50:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:50:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfldp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dcr2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.223397 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"070393d9-65ec-4cf1-a04a-c3eb9addbf91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6095ffe55857be058e3deefd851f8b2a6a449fbc8dbba34ff608953b3a6e479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c290e0aef955787b3d76e22df9ea12f6115430221197b05a85e49e6bb309491\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6daffb6f1e28d121aedcfca980cfcea8a200f161677dd081595936afa3609d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d4b5715de0cca29bdfa154b4dec5b4dc188faed8aa2e26dad8a1618ae1b0a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2132251c91e7468b83c7d2a718dc036df2a862c3bf073dc613bbd53ced0ba0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e071648aa2270283e61165bd66265c92f2eb1e68d2dc178c1ff04838d68e93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82ff168d38f7594e75b462ad657408e4b61e300312eda061bcce5ae71ab7999b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5p7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vqrjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.235060 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vtscw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b36dd73-c75d-446e-85fe-d11afdd5a816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mwnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vtscw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.243306 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0840e458-6b35-4dcd-a8ca-57479a256d75\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14cf97a4526994bafc923e20f51157fe84ec6690b3bba1f2210a43105a2ce6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420bb77c1e6cdfa7d07b36a04764f9404a1ada3d66e58fa5444fc93d8981bd11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420bb77c1e6cdfa7d07b36a04764f9404a1ada3d66e58fa5444fc93d8981bd11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.245511 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.245559 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.245570 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.245582 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.245590 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:29Z","lastTransitionTime":"2025-10-14T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.256978 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f615771eb7f0af63180911c7dc504e0e726ca42ba86635df50af430345fa2eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.267484 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a50d28748b272168603b0ce86a0ea41c8ed7cca35032e23fc2b8c0a0a51b4df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.277297 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjnk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6385a106-293c-455e-99ef-9810b91fec6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bb6cf0e9c781dbd589623dc328ec65c6136da31d68e9621d604bd707167fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbfft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjnk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.302015 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a92ec1-e2b0-407d-990e-ce52a980b64b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T14:50:23Z\\\",\\\"message\\\":\\\"50:22.932667 6874 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1014 14:50:22.932675 6874 obj_retry.go:409] Going to retry *v1.Pod resource setup for 10 objects: [openshift-etcd/etcd-crc openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn openshift-kube-apiserver/kube-apiserver-crc openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-network-diagnostics/network-check-target-xd92c openshift-kube-controller-manager/kube-controller-manager-crc openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-machine-config-operator/kube-rbac-proxy-crio-crc openshift-multus/network-metrics-daemon-vtscw]\\\\nI1014 14:50:22.932688 6874 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1014 14:50:22.932702 6874 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-vtscw before timer (time: 2025-10-14 14:50:23.968863246 +0000 UTC m=+1.572788099): skip\\\\nI1014 14:50:22.932717 6874 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 14:50:22.932757 6874 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:50:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg7wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mdvx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.326644 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d4072e2-f9ba-4a39-8851-88636f28a4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70a4c599edfecc88f1bcb70451da0696979cc219aec2184ecbaff3c7d858c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12e3ab9c6fa95b67b5a36c3a
8c049c238f73bb62c7f2b40a26d634659fef57e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d265778badb51c9f38b555d3feeed477e44578cbb3d85ef94aa3ecb5a933671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7c943f2b7364b742ce8aa1cf830b7685fb255d937095ffad093a7b1b754578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c419ed8ddc72e3c7ec1f4ee259ef4ff6dbe17a9189e9ad75a9b9d89d0539cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbd016517939e2f7643b874a2374e6d913f3acd24493968db856ee4e36e01d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c38c24c019f0cc5d87832e1ea585df9e3ee67769f84d4dbc85f349ce6a46df20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cc7b9487d0f6d3983148d7039ed0d02e06822b64fb6686edd993a543be6795c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.340726 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c9e4163-5c6e-432e-a102-9ea604c52670\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77cb6a052eba125540bca03f3d4c5012388c97fb7721e5507173d4734d2728ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a14ab9e9634b0e837612ed009d665635a9b30cc4a20a9228dadcb9cccb10e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeb2057829620ddc0d57b7ff0ccefdded3a71eab285e9a46b06d0eb537822ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e86f01da3401fbcfea9eda8ece7353dcabf42a9614161372fdccfb85abd37ab2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c38313b5597e24608aa8ef498b659c6ea8d81061f7724a14c8e4a31c2b868e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T14:49:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 14:49:03.097197 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 14:49:03.099743 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3871451681/tls.crt::/tmp/serving-cert-3871451681/tls.key\\\\\\\"\\\\nI1014 14:49:18.425255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 14:49:18.462252 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 14:49:18.462351 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 14:49:18.462424 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 14:49:18.462452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 14:49:18.467229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 14:49:18.467314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 14:49:18.467358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 14:49:18.467378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 14:49:18.467397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 14:49:18.467417 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 14:49:18.467271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 14:49:18.469227 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7b095ca0aef39e82c4d0b5e477f15fc44920c46547af9211936d46179d0582\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4affcb9a1524399d3b8c9f0b58ff8280f537d2d5f9b0d39a51b6019a31b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.348524 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.348560 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.348597 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.348614 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.348626 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:29Z","lastTransitionTime":"2025-10-14T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.353978 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fecd02c098b0837538958a4538ded08941c78076b35d9c42b2ac688483e16a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d1cdd6b8745d64ca6832bd164171b26ff1eb179f504da5619ffdf783b93ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.369222 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6436186e-e1ba-4c37-b8f9-210de837a051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ec68212eca188728bb029e63f832fbc7cf589c186654e03ec5127f97efb3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T14:49:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2x22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T14:49:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6ldv4\": Internal error occurred: failed calling webhook 
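Every status patch in this stretch fails the same way: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate whose notAfter (2025-08-24T17:21:41Z) is roughly seven weeks behind the node clock (2025-10-14T14:50:29Z), so the TLS handshake is rejected before any patch reaches the webhook. A minimal sketch of the validity-window check the TLS client is performing, with both instants copied from the log lines above (the script is illustrative only, not part of the cluster):

    from datetime import datetime, timezone

    not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)  # webhook cert notAfter, from the log
    node_now = datetime(2025, 10, 14, 14, 50, 29, tzinfo=timezone.utc)  # node clock at patch time, from the log

    # The client rejects the chain whenever now > notAfter, which surfaces in the
    # kubelet log as "x509: certificate has expired or is not yet valid".
    if node_now > not_after:
        print(f"webhook cert expired {node_now - not_after} before this patch attempt")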
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T14:50:29Z is after 2025-08-24T17:21:41Z" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.450840 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.450910 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.450921 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.450963 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.450976 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:29Z","lastTransitionTime":"2025-10-14T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.553842 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.553895 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.553912 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.553933 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.553950 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:29Z","lastTransitionTime":"2025-10-14T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.656908 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.656949 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.656962 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.656979 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.656992 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:29Z","lastTransitionTime":"2025-10-14T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.759878 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.759920 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.759934 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.759951 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.759964 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:29Z","lastTransitionTime":"2025-10-14T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.863109 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.863191 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.863212 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.863239 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.863259 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:29Z","lastTransitionTime":"2025-10-14T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
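The condition blob that setters.go:603 re-emits on every iteration above is plain JSON once lifted out of the log line. A small illustrative sketch that parses one occurrence, copied verbatim from the entries above, and pulls out the two fields that matter for triage:

    import json

    condition = json.loads(
        '{"type":"Ready","status":"False",'
        '"lastHeartbeatTime":"2025-10-14T14:50:29Z",'
        '"lastTransitionTime":"2025-10-14T14:50:29Z",'
        '"reason":"KubeletNotReady",'
        '"message":"container runtime network not ready: NetworkReady=false '
        'reason:NetworkPluginNotReady message:Network plugin returns error: '
        'no CNI configuration file in /etc/kubernetes/cni/net.d/. '
        'Has your network provider started?"}'
    )
    print(condition["reason"])   # -> KubeletNotReady
    print(condition["message"])  # -> points at the missing CNI config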
Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.966347 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.966467 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.966487 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.966511 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:29 crc kubenswrapper[4860]: I1014 14:50:29.966529 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:29Z","lastTransitionTime":"2025-10-14T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.068504 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.068904 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.068923 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.068947 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.068964 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:30Z","lastTransitionTime":"2025-10-14T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.171173 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.171228 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.171246 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.171272 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.171290 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:30Z","lastTransitionTime":"2025-10-14T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.273866 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.273904 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.273914 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.273929 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.273939 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:30Z","lastTransitionTime":"2025-10-14T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.376096 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.376134 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.376143 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.376157 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.376166 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:30Z","lastTransitionTime":"2025-10-14T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.478685 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.478721 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.478732 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.478748 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.478758 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:30Z","lastTransitionTime":"2025-10-14T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
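The same five-entry block (four recorded node events plus the NotReady condition) repeats every hundred milliseconds or so for as long as the CNI config is missing, so this stretch of the journal reads much better after deduplication. A hypothetical helper for that, assuming the journal has been saved to a local file (the kubelet.log path is a stand-in chosen for illustration, not something the cluster produces):

    import re
    from collections import Counter

    counts = Counter()
    event_re = re.compile(r'event="([A-Za-z]+)"')
    with open("kubelet.log", encoding="utf-8") as fh:
        for line in fh:
            for match in event_re.finditer(line):
                counts[match.group(1)] += 1
            counts["Node became not ready"] += line.count('"Node became not ready"')

    # Prints each distinct event with its repeat count instead of thousands of lines.
    for name, count in counts.most_common():
        print(f"{count:6d}  {name}")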
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.581619 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.581661 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.581672 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.581688 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.581700 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:30Z","lastTransitionTime":"2025-10-14T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.684752 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.684792 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.684801 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.684817 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.684827 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:30Z","lastTransitionTime":"2025-10-14T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.787818 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.787859 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.787871 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.787889 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.787900 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:30Z","lastTransitionTime":"2025-10-14T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.890064 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.890097 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.890106 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.890118 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.890126 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:30Z","lastTransitionTime":"2025-10-14T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.992781 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.992838 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.992854 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.992877 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:30 crc kubenswrapper[4860]: I1014 14:50:30.992893 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:30Z","lastTransitionTime":"2025-10-14T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.060583 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.060616 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.060712 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 14:50:31 crc kubenswrapper[4860]: E1014 14:50:31.060715 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
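All of the sandbox failures above share one precondition: the kubelet finds nothing under /etc/kubernetes/cni/net.d/, and the network pods that would populate it cannot report status because their patches die on the expired webhook certificate. A minimal on-node check of that directory, assuming access to the node filesystem; the path is quoted directly from the NetworkReady error:

    import os

    cni_dir = "/etc/kubernetes/cni/net.d/"  # path quoted in the NetworkReady error above
    try:
        entries = sorted(os.listdir(cni_dir))
    except FileNotFoundError:
        entries = []
    # An empty listing matches "no CNI configuration file" in the log; the
    # directory is normally populated once the network provider starts.
    print(entries if entries else "no CNI configuration files found")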
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:31 crc kubenswrapper[4860]: E1014 14:50:31.060842 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:31 crc kubenswrapper[4860]: E1014 14:50:31.060937 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.061016 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:31 crc kubenswrapper[4860]: E1014 14:50:31.061167 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.095259 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.095325 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.095348 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.095375 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.095396 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:31Z","lastTransitionTime":"2025-10-14T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.200230 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.200301 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.200324 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.200354 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.200377 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:31Z","lastTransitionTime":"2025-10-14T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.302776 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.302830 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.302849 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.302872 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.302892 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:31Z","lastTransitionTime":"2025-10-14T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.404900 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.404974 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.404987 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.405002 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.405012 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:31Z","lastTransitionTime":"2025-10-14T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.508139 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.508209 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.508219 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.508233 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.508243 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:31Z","lastTransitionTime":"2025-10-14T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.610145 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.610199 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.610218 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.610242 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.610260 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:31Z","lastTransitionTime":"2025-10-14T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.712980 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.713022 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.713059 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.713075 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.713088 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:31Z","lastTransitionTime":"2025-10-14T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.815072 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.815117 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.815128 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.815143 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.815155 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:31Z","lastTransitionTime":"2025-10-14T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.917462 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.917519 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.917536 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.917562 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:31 crc kubenswrapper[4860]: I1014 14:50:31.917578 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:31Z","lastTransitionTime":"2025-10-14T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.020256 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.020311 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.020328 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.020354 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.020378 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:32Z","lastTransitionTime":"2025-10-14T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.122864 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.122967 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.122988 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.123016 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.123073 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:32Z","lastTransitionTime":"2025-10-14T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.226042 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.226094 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.226111 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.226131 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.226145 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:32Z","lastTransitionTime":"2025-10-14T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.329300 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.329362 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.329382 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.329407 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.329424 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:32Z","lastTransitionTime":"2025-10-14T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.432132 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.432165 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.432177 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.432191 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.432200 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:32Z","lastTransitionTime":"2025-10-14T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.534866 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.534915 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.534931 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.534953 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.534970 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:32Z","lastTransitionTime":"2025-10-14T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.637296 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.637343 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.637359 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.637385 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.637402 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:32Z","lastTransitionTime":"2025-10-14T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.739540 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.739577 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.739588 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.739603 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.739615 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:32Z","lastTransitionTime":"2025-10-14T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.841780 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.841827 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.841838 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.841856 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.841869 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:32Z","lastTransitionTime":"2025-10-14T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.944357 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.944396 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.944405 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.944422 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:32 crc kubenswrapper[4860]: I1014 14:50:32.944433 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:32Z","lastTransitionTime":"2025-10-14T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.046664 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.046737 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.046751 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.046768 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.046779 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:33Z","lastTransitionTime":"2025-10-14T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.061265 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw"
Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.061290 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 14:50:33 crc kubenswrapper[4860]: E1014 14:50:33.061401 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816"
Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.061404 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.061436 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 14:50:33 crc kubenswrapper[4860]: E1014 14:50:33.061576 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 14:50:33 crc kubenswrapper[4860]: E1014 14:50:33.061696 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:33 crc kubenswrapper[4860]: E1014 14:50:33.061802 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.149648 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.149694 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.149717 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.149745 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.149762 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:33Z","lastTransitionTime":"2025-10-14T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.252732 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.252765 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.252773 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.252787 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.252798 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:33Z","lastTransitionTime":"2025-10-14T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.355788 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.355829 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.355838 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.355853 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.355863 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:33Z","lastTransitionTime":"2025-10-14T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.458325 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.458438 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.458452 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.458468 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.458480 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:33Z","lastTransitionTime":"2025-10-14T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.561098 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.561147 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.561156 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.561170 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.561178 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:33Z","lastTransitionTime":"2025-10-14T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.663012 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.663092 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.663106 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.663120 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.663129 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:33Z","lastTransitionTime":"2025-10-14T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.765632 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.765749 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.765761 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.765775 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.765784 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:33Z","lastTransitionTime":"2025-10-14T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.868706 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.868742 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.868749 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.868761 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.868769 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:33Z","lastTransitionTime":"2025-10-14T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.971310 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.971347 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.971358 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.971373 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:33 crc kubenswrapper[4860]: I1014 14:50:33.971383 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:33Z","lastTransitionTime":"2025-10-14T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.073462 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.073493 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.073501 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.073516 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.073525 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:34Z","lastTransitionTime":"2025-10-14T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.175658 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.175703 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.175716 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.175731 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.175743 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:34Z","lastTransitionTime":"2025-10-14T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.278288 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.278332 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.278343 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.278359 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.278372 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:34Z","lastTransitionTime":"2025-10-14T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.380135 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.380167 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.380175 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.380187 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.380195 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:34Z","lastTransitionTime":"2025-10-14T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.482786 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.482814 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.482822 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.482835 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.482844 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:34Z","lastTransitionTime":"2025-10-14T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.585645 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.585682 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.585694 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.585709 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.585718 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:34Z","lastTransitionTime":"2025-10-14T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.688282 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.688357 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.688374 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.688393 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.688406 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:34Z","lastTransitionTime":"2025-10-14T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.791505 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.791595 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.791614 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.791637 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.791654 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:34Z","lastTransitionTime":"2025-10-14T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.894204 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.894270 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.894282 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.894297 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.894307 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:34Z","lastTransitionTime":"2025-10-14T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.997586 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.997666 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.997682 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.997705 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:34 crc kubenswrapper[4860]: I1014 14:50:34.997718 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:34Z","lastTransitionTime":"2025-10-14T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.061363 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.061453 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.061389 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:35 crc kubenswrapper[4860]: E1014 14:50:35.061853 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:35 crc kubenswrapper[4860]: E1014 14:50:35.061694 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.061937 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:35 crc kubenswrapper[4860]: E1014 14:50:35.062068 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:35 crc kubenswrapper[4860]: E1014 14:50:35.062179 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.100916 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.100960 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.100972 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.100987 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.100997 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:35Z","lastTransitionTime":"2025-10-14T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.203943 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.203989 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.203998 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.204013 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.204039 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:35Z","lastTransitionTime":"2025-10-14T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.306389 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.306445 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.306462 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.306482 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.306497 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:35Z","lastTransitionTime":"2025-10-14T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.408863 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.408912 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.408924 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.408941 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.408953 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:35Z","lastTransitionTime":"2025-10-14T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.510854 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.510903 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.510919 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.510942 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.510957 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:35Z","lastTransitionTime":"2025-10-14T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.613474 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.613523 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.613537 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.613558 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.613572 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:35Z","lastTransitionTime":"2025-10-14T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.716647 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.716693 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.716704 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.716727 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.716748 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:35Z","lastTransitionTime":"2025-10-14T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.820557 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.820617 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.820635 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.820660 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.820682 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:35Z","lastTransitionTime":"2025-10-14T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.922998 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.923120 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.923143 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.923171 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:35 crc kubenswrapper[4860]: I1014 14:50:35.923194 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:35Z","lastTransitionTime":"2025-10-14T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.025796 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.025856 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.025873 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.025893 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.025906 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:36Z","lastTransitionTime":"2025-10-14T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.062259 4860 scope.go:117] "RemoveContainer" containerID="25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d" Oct 14 14:50:36 crc kubenswrapper[4860]: E1014 14:50:36.062389 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.128365 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.128407 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.128416 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.128430 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.128439 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:36Z","lastTransitionTime":"2025-10-14T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.230649 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.230683 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.230719 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.230736 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.230746 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:36Z","lastTransitionTime":"2025-10-14T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.333201 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.333238 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.333248 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.333263 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.333273 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:36Z","lastTransitionTime":"2025-10-14T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.435309 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.435369 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.435382 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.435400 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.435408 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:36Z","lastTransitionTime":"2025-10-14T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.537725 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.537761 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.537771 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.537785 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.537795 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:36Z","lastTransitionTime":"2025-10-14T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.640047 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.640077 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.640085 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.640098 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.640122 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:36Z","lastTransitionTime":"2025-10-14T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.653652 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.653698 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.653711 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.653724 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.653732 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T14:50:36Z","lastTransitionTime":"2025-10-14T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.697283 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8"] Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.697738 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.702484 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.702616 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.702617 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.702744 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.739963 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=19.739946549 podStartE2EDuration="19.739946549s" podCreationTimestamp="2025-10-14 14:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:50:36.72140689 +0000 UTC m=+98.308190349" watchObservedRunningTime="2025-10-14 14:50:36.739946549 +0000 UTC m=+98.326729998" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.764229 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-wjnk2" podStartSLOduration=78.764211986 podStartE2EDuration="1m18.764211986s" podCreationTimestamp="2025-10-14 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:50:36.764207146 +0000 UTC m=+98.350990605" watchObservedRunningTime="2025-10-14 14:50:36.764211986 +0000 UTC m=+98.350995435" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.783367 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a7d276bd-4b0f-47d0-810c-13c9498dffa2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5j7h8\" (UID: \"a7d276bd-4b0f-47d0-810c-13c9498dffa2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.783412 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7d276bd-4b0f-47d0-810c-13c9498dffa2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5j7h8\" (UID: \"a7d276bd-4b0f-47d0-810c-13c9498dffa2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.783435 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7d276bd-4b0f-47d0-810c-13c9498dffa2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5j7h8\" (UID: \"a7d276bd-4b0f-47d0-810c-13c9498dffa2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.783581 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/a7d276bd-4b0f-47d0-810c-13c9498dffa2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5j7h8\" (UID: \"a7d276bd-4b0f-47d0-810c-13c9498dffa2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.783639 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a7d276bd-4b0f-47d0-810c-13c9498dffa2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5j7h8\" (UID: \"a7d276bd-4b0f-47d0-810c-13c9498dffa2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.813449 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=76.813435497 podStartE2EDuration="1m16.813435497s" podCreationTimestamp="2025-10-14 14:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:50:36.813067639 +0000 UTC m=+98.399851088" watchObservedRunningTime="2025-10-14 14:50:36.813435497 +0000 UTC m=+98.400218946" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.830538 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=77.830522752 podStartE2EDuration="1m17.830522752s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:50:36.830135204 +0000 UTC m=+98.416918673" watchObservedRunningTime="2025-10-14 14:50:36.830522752 +0000 UTC m=+98.417306191" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.867143 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=47.867124528 podStartE2EDuration="47.867124528s" podCreationTimestamp="2025-10-14 14:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:50:36.866583936 +0000 UTC m=+98.453367405" watchObservedRunningTime="2025-10-14 14:50:36.867124528 +0000 UTC m=+98.453907977" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.867349 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podStartSLOduration=78.867344654 podStartE2EDuration="1m18.867344654s" podCreationTimestamp="2025-10-14 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:50:36.854461522 +0000 UTC m=+98.441244981" watchObservedRunningTime="2025-10-14 14:50:36.867344654 +0000 UTC m=+98.454128103" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.884923 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7d276bd-4b0f-47d0-810c-13c9498dffa2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5j7h8\" (UID: \"a7d276bd-4b0f-47d0-810c-13c9498dffa2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.884960 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a7d276bd-4b0f-47d0-810c-13c9498dffa2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5j7h8\" (UID: \"a7d276bd-4b0f-47d0-810c-13c9498dffa2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.884989 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a7d276bd-4b0f-47d0-810c-13c9498dffa2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5j7h8\" (UID: \"a7d276bd-4b0f-47d0-810c-13c9498dffa2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.885006 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7d276bd-4b0f-47d0-810c-13c9498dffa2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5j7h8\" (UID: \"a7d276bd-4b0f-47d0-810c-13c9498dffa2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.885022 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7d276bd-4b0f-47d0-810c-13c9498dffa2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5j7h8\" (UID: \"a7d276bd-4b0f-47d0-810c-13c9498dffa2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.886095 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7d276bd-4b0f-47d0-810c-13c9498dffa2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5j7h8\" (UID: \"a7d276bd-4b0f-47d0-810c-13c9498dffa2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.886326 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a7d276bd-4b0f-47d0-810c-13c9498dffa2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5j7h8\" (UID: \"a7d276bd-4b0f-47d0-810c-13c9498dffa2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.886361 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a7d276bd-4b0f-47d0-810c-13c9498dffa2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5j7h8\" (UID: \"a7d276bd-4b0f-47d0-810c-13c9498dffa2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.889398 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-2thzv" podStartSLOduration=78.88938413 podStartE2EDuration="1m18.88938413s" podCreationTimestamp="2025-10-14 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:50:36.889204076 +0000 UTC m=+98.475987525" watchObservedRunningTime="2025-10-14 14:50:36.88938413 +0000 UTC m=+98.476167579" Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.892104 4860 
Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.902905 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kxsqn" podStartSLOduration=77.902884345 podStartE2EDuration="1m17.902884345s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:50:36.902319302 +0000 UTC m=+98.489102751" watchObservedRunningTime="2025-10-14 14:50:36.902884345 +0000 UTC m=+98.489667804"
Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.911310 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7d276bd-4b0f-47d0-810c-13c9498dffa2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5j7h8\" (UID: \"a7d276bd-4b0f-47d0-810c-13c9498dffa2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8"
Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.922670 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=74.922649011 podStartE2EDuration="1m14.922649011s" podCreationTimestamp="2025-10-14 14:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:50:36.921854793 +0000 UTC m=+98.508638242" watchObservedRunningTime="2025-10-14 14:50:36.922649011 +0000 UTC m=+98.509432460"
Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.956790 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-dcr2g" podStartSLOduration=78.956769991 podStartE2EDuration="1m18.956769991s" podCreationTimestamp="2025-10-14 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:50:36.956257439 +0000 UTC m=+98.543040888" watchObservedRunningTime="2025-10-14 14:50:36.956769991 +0000 UTC m=+98.543553440"
Oct 14 14:50:36 crc kubenswrapper[4860]: I1014 14:50:36.980076 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-vqrjw" podStartSLOduration=78.980055316 podStartE2EDuration="1m18.980055316s" podCreationTimestamp="2025-10-14 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:50:36.971183747 +0000 UTC m=+98.557967196" watchObservedRunningTime="2025-10-14 14:50:36.980055316 +0000 UTC m=+98.566838765"
Oct 14 14:50:37 crc kubenswrapper[4860]: I1014 14:50:37.012968 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8"
Oct 14 14:50:37 crc kubenswrapper[4860]: I1014 14:50:37.063378 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 14:50:37 crc kubenswrapper[4860]: E1014 14:50:37.063496 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 14:50:37 crc kubenswrapper[4860]: I1014 14:50:37.063686 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 14:50:37 crc kubenswrapper[4860]: E1014 14:50:37.063742 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 14:50:37 crc kubenswrapper[4860]: I1014 14:50:37.063843 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw"
Oct 14 14:50:37 crc kubenswrapper[4860]: E1014 14:50:37.063892 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816"
Oct 14 14:50:37 crc kubenswrapper[4860]: I1014 14:50:37.063984 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 14:50:37 crc kubenswrapper[4860]: E1014 14:50:37.064043 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 14:50:37 crc kubenswrapper[4860]: I1014 14:50:37.288833 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs\") pod \"network-metrics-daemon-vtscw\" (UID: \"2b36dd73-c75d-446e-85fe-d11afdd5a816\") " pod="openshift-multus/network-metrics-daemon-vtscw"
Oct 14 14:50:37 crc kubenswrapper[4860]: E1014 14:50:37.288963 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 14:50:37 crc kubenswrapper[4860]: E1014 14:50:37.289048 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs podName:2b36dd73-c75d-446e-85fe-d11afdd5a816 nodeName:}" failed. No retries permitted until 2025-10-14 14:51:41.289008008 +0000 UTC m=+162.875791457 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs") pod "network-metrics-daemon-vtscw" (UID: "2b36dd73-c75d-446e-85fe-d11afdd5a816") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 14:50:37 crc kubenswrapper[4860]: I1014 14:50:37.630266 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" event={"ID":"a7d276bd-4b0f-47d0-810c-13c9498dffa2","Type":"ContainerStarted","Data":"eb7f3f6727a193005860864d7fd47619c45d9b0f56ec2a1fa4eccab20ce5c590"}
Oct 14 14:50:37 crc kubenswrapper[4860]: I1014 14:50:37.630323 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" event={"ID":"a7d276bd-4b0f-47d0-810c-13c9498dffa2","Type":"ContainerStarted","Data":"429b5557d822b2657e368ee7a3b4e4d1bdcfe8d6399b13f2b9dc5330f5806394"}
Oct 14 14:50:37 crc kubenswrapper[4860]: I1014 14:50:37.646963 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5j7h8" podStartSLOduration=79.646948744 podStartE2EDuration="1m19.646948744s" podCreationTimestamp="2025-10-14 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:50:37.646716989 +0000 UTC m=+99.233500448" watchObservedRunningTime="2025-10-14 14:50:37.646948744 +0000 UTC m=+99.233732193"
Oct 14 14:50:39 crc kubenswrapper[4860]: I1014 14:50:39.060837 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 14:50:39 crc kubenswrapper[4860]: I1014 14:50:39.060895 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 14:50:39 crc kubenswrapper[4860]: I1014 14:50:39.060895 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 14:50:39 crc kubenswrapper[4860]: E1014 14:50:39.062600 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 14:50:39 crc kubenswrapper[4860]: I1014 14:50:39.062613 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw"
Oct 14 14:50:39 crc kubenswrapper[4860]: E1014 14:50:39.062777 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 14:50:39 crc kubenswrapper[4860]: E1014 14:50:39.062710 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 14:50:39 crc kubenswrapper[4860]: E1014 14:50:39.062945 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816"
Oct 14 14:50:41 crc kubenswrapper[4860]: I1014 14:50:41.060730 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 14:50:41 crc kubenswrapper[4860]: I1014 14:50:41.060836 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 14:50:41 crc kubenswrapper[4860]: I1014 14:50:41.060875 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 14:50:41 crc kubenswrapper[4860]: E1014 14:50:41.060992 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 14:50:41 crc kubenswrapper[4860]: E1014 14:50:41.061189 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 14:50:41 crc kubenswrapper[4860]: E1014 14:50:41.061216 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 14:50:41 crc kubenswrapper[4860]: I1014 14:50:41.062007 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw"
Oct 14 14:50:41 crc kubenswrapper[4860]: E1014 14:50:41.062317 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816"
Oct 14 14:50:43 crc kubenswrapper[4860]: I1014 14:50:43.061129 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw"
Oct 14 14:50:43 crc kubenswrapper[4860]: I1014 14:50:43.061247 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 14:50:43 crc kubenswrapper[4860]: I1014 14:50:43.062082 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 14:50:43 crc kubenswrapper[4860]: I1014 14:50:43.062161 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 14:50:43 crc kubenswrapper[4860]: E1014 14:50:43.062276 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816"
Oct 14 14:50:43 crc kubenswrapper[4860]: E1014 14:50:43.062389 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 14:50:43 crc kubenswrapper[4860]: E1014 14:50:43.062471 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 14:50:43 crc kubenswrapper[4860]: E1014 14:50:43.062702 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 14:50:45 crc kubenswrapper[4860]: I1014 14:50:45.060468 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 14:50:45 crc kubenswrapper[4860]: I1014 14:50:45.060515 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 14:50:45 crc kubenswrapper[4860]: I1014 14:50:45.060536 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 14:50:45 crc kubenswrapper[4860]: I1014 14:50:45.060650 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw"
Oct 14 14:50:45 crc kubenswrapper[4860]: E1014 14:50:45.060830 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 14:50:45 crc kubenswrapper[4860]: E1014 14:50:45.060974 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 14:50:45 crc kubenswrapper[4860]: E1014 14:50:45.061073 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 14:50:45 crc kubenswrapper[4860]: E1014 14:50:45.061132 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816"
Oct 14 14:50:47 crc kubenswrapper[4860]: I1014 14:50:47.061547 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 14:50:47 crc kubenswrapper[4860]: I1014 14:50:47.061612 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 14:50:47 crc kubenswrapper[4860]: I1014 14:50:47.061613 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 14:50:47 crc kubenswrapper[4860]: E1014 14:50:47.061706 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 14:50:47 crc kubenswrapper[4860]: I1014 14:50:47.061762 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw"
Oct 14 14:50:47 crc kubenswrapper[4860]: E1014 14:50:47.061852 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 14:50:47 crc kubenswrapper[4860]: E1014 14:50:47.061947 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 14:50:47 crc kubenswrapper[4860]: E1014 14:50:47.062096 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816"
Oct 14 14:50:49 crc kubenswrapper[4860]: I1014 14:50:49.060761 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw"
Oct 14 14:50:49 crc kubenswrapper[4860]: E1014 14:50:49.060904 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816"
Oct 14 14:50:49 crc kubenswrapper[4860]: I1014 14:50:49.061232 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 14:50:49 crc kubenswrapper[4860]: E1014 14:50:49.062622 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 14:50:49 crc kubenswrapper[4860]: I1014 14:50:49.062679 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 14:50:49 crc kubenswrapper[4860]: I1014 14:50:49.062687 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 14:50:49 crc kubenswrapper[4860]: E1014 14:50:49.062775 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 14:50:49 crc kubenswrapper[4860]: E1014 14:50:49.062880 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 14:50:51 crc kubenswrapper[4860]: I1014 14:50:51.061109 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 14:50:51 crc kubenswrapper[4860]: E1014 14:50:51.061574 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 14:50:51 crc kubenswrapper[4860]: I1014 14:50:51.061211 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw"
Oct 14 14:50:51 crc kubenswrapper[4860]: E1014 14:50:51.061671 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816"
Oct 14 14:50:51 crc kubenswrapper[4860]: I1014 14:50:51.061481 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 14:50:51 crc kubenswrapper[4860]: E1014 14:50:51.061737 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 14:50:51 crc kubenswrapper[4860]: I1014 14:50:51.061462 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 14:50:51 crc kubenswrapper[4860]: I1014 14:50:51.061827 4860 scope.go:117] "RemoveContainer" containerID="25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d"
Oct 14 14:50:51 crc kubenswrapper[4860]: E1014 14:50:51.061841 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 14:50:51 crc kubenswrapper[4860]: E1014 14:50:51.061976 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mdvx2_openshift-ovn-kubernetes(87a92ec1-e2b0-407d-990e-ce52a980b64b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:53 crc kubenswrapper[4860]: I1014 14:50:53.061439 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:53 crc kubenswrapper[4860]: E1014 14:50:53.062121 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:55 crc kubenswrapper[4860]: I1014 14:50:55.061123 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:55 crc kubenswrapper[4860]: I1014 14:50:55.061145 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:55 crc kubenswrapper[4860]: E1014 14:50:55.061237 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:55 crc kubenswrapper[4860]: I1014 14:50:55.061373 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:55 crc kubenswrapper[4860]: I1014 14:50:55.061418 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:55 crc kubenswrapper[4860]: E1014 14:50:55.061509 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:55 crc kubenswrapper[4860]: E1014 14:50:55.061585 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:55 crc kubenswrapper[4860]: E1014 14:50:55.061744 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:57 crc kubenswrapper[4860]: I1014 14:50:57.060892 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:57 crc kubenswrapper[4860]: E1014 14:50:57.061010 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:57 crc kubenswrapper[4860]: I1014 14:50:57.061219 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:57 crc kubenswrapper[4860]: I1014 14:50:57.061256 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:57 crc kubenswrapper[4860]: E1014 14:50:57.061393 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:57 crc kubenswrapper[4860]: E1014 14:50:57.061267 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:57 crc kubenswrapper[4860]: I1014 14:50:57.061529 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:57 crc kubenswrapper[4860]: E1014 14:50:57.061576 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:57 crc kubenswrapper[4860]: I1014 14:50:57.688482 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dcr2g_ceb09eae-57c9-4a8e-95d5-aa40e49f7316/kube-multus/1.log" Oct 14 14:50:57 crc kubenswrapper[4860]: I1014 14:50:57.688787 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dcr2g_ceb09eae-57c9-4a8e-95d5-aa40e49f7316/kube-multus/0.log" Oct 14 14:50:57 crc kubenswrapper[4860]: I1014 14:50:57.688819 4860 generic.go:334] "Generic (PLEG): container finished" podID="ceb09eae-57c9-4a8e-95d5-aa40e49f7316" containerID="4dd2467d8c6acdf7e08b9eab1c254d5a14134e125433a9b40b8eb6dc66cbe4ab" exitCode=1 Oct 14 14:50:57 crc kubenswrapper[4860]: I1014 14:50:57.688843 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dcr2g" event={"ID":"ceb09eae-57c9-4a8e-95d5-aa40e49f7316","Type":"ContainerDied","Data":"4dd2467d8c6acdf7e08b9eab1c254d5a14134e125433a9b40b8eb6dc66cbe4ab"} Oct 14 14:50:57 crc kubenswrapper[4860]: I1014 14:50:57.688871 4860 scope.go:117] "RemoveContainer" containerID="854ba80071764f94417e4fc8e83513e6559f9ab80e0057f05c1fd76c5a83420f" Oct 14 14:50:57 crc kubenswrapper[4860]: I1014 14:50:57.689775 4860 scope.go:117] "RemoveContainer" containerID="4dd2467d8c6acdf7e08b9eab1c254d5a14134e125433a9b40b8eb6dc66cbe4ab" Oct 14 14:50:57 crc kubenswrapper[4860]: E1014 14:50:57.689963 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-dcr2g_openshift-multus(ceb09eae-57c9-4a8e-95d5-aa40e49f7316)\"" pod="openshift-multus/multus-dcr2g" podUID="ceb09eae-57c9-4a8e-95d5-aa40e49f7316" Oct 14 14:50:58 crc kubenswrapper[4860]: I1014 14:50:58.694484 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dcr2g_ceb09eae-57c9-4a8e-95d5-aa40e49f7316/kube-multus/1.log" Oct 14 14:50:59 crc kubenswrapper[4860]: E1014 14:50:59.042450 4860 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Oct 14 14:50:59 crc kubenswrapper[4860]: I1014 14:50:59.060516 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:50:59 crc kubenswrapper[4860]: I1014 14:50:59.060539 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:50:59 crc kubenswrapper[4860]: I1014 14:50:59.060577 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:50:59 crc kubenswrapper[4860]: I1014 14:50:59.060643 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:50:59 crc kubenswrapper[4860]: E1014 14:50:59.061433 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:50:59 crc kubenswrapper[4860]: E1014 14:50:59.061577 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:50:59 crc kubenswrapper[4860]: E1014 14:50:59.061619 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:50:59 crc kubenswrapper[4860]: E1014 14:50:59.061656 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:50:59 crc kubenswrapper[4860]: E1014 14:50:59.158744 4860 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 14 14:51:01 crc kubenswrapper[4860]: I1014 14:51:01.060750 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:51:01 crc kubenswrapper[4860]: I1014 14:51:01.060797 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:51:01 crc kubenswrapper[4860]: I1014 14:51:01.060744 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:51:01 crc kubenswrapper[4860]: I1014 14:51:01.060862 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:51:01 crc kubenswrapper[4860]: E1014 14:51:01.060884 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:51:01 crc kubenswrapper[4860]: E1014 14:51:01.060927 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:51:01 crc kubenswrapper[4860]: E1014 14:51:01.061112 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:51:01 crc kubenswrapper[4860]: E1014 14:51:01.061213 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:51:03 crc kubenswrapper[4860]: I1014 14:51:03.061555 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:51:03 crc kubenswrapper[4860]: I1014 14:51:03.061583 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:51:03 crc kubenswrapper[4860]: I1014 14:51:03.061555 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:51:03 crc kubenswrapper[4860]: I1014 14:51:03.061657 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:51:03 crc kubenswrapper[4860]: E1014 14:51:03.061686 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:51:03 crc kubenswrapper[4860]: E1014 14:51:03.061795 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:51:03 crc kubenswrapper[4860]: E1014 14:51:03.061860 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:51:03 crc kubenswrapper[4860]: E1014 14:51:03.061937 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:51:04 crc kubenswrapper[4860]: I1014 14:51:04.062022 4860 scope.go:117] "RemoveContainer" containerID="25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d" Oct 14 14:51:04 crc kubenswrapper[4860]: E1014 14:51:04.159880 4860 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 14 14:51:04 crc kubenswrapper[4860]: I1014 14:51:04.716750 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovnkube-controller/3.log" Oct 14 14:51:04 crc kubenswrapper[4860]: I1014 14:51:04.721824 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerStarted","Data":"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de"} Oct 14 14:51:04 crc kubenswrapper[4860]: I1014 14:51:04.722525 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 14:51:04 crc kubenswrapper[4860]: I1014 14:51:04.760579 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podStartSLOduration=105.760554346 podStartE2EDuration="1m45.760554346s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:04.759266007 +0000 UTC m=+126.346049466" watchObservedRunningTime="2025-10-14 14:51:04.760554346 +0000 UTC m=+126.347337795" Oct 14 14:51:05 crc kubenswrapper[4860]: I1014 14:51:05.061163 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:51:05 crc kubenswrapper[4860]: I1014 14:51:05.061194 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:51:05 crc kubenswrapper[4860]: I1014 14:51:05.061243 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:51:05 crc kubenswrapper[4860]: E1014 14:51:05.061295 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:51:05 crc kubenswrapper[4860]: E1014 14:51:05.061350 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:51:05 crc kubenswrapper[4860]: I1014 14:51:05.061176 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:51:05 crc kubenswrapper[4860]: E1014 14:51:05.061513 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:51:05 crc kubenswrapper[4860]: E1014 14:51:05.061612 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:51:05 crc kubenswrapper[4860]: I1014 14:51:05.236267 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vtscw"] Oct 14 14:51:05 crc kubenswrapper[4860]: I1014 14:51:05.724441 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:51:05 crc kubenswrapper[4860]: E1014 14:51:05.724570 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:51:07 crc kubenswrapper[4860]: I1014 14:51:07.061287 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:51:07 crc kubenswrapper[4860]: I1014 14:51:07.061331 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:51:07 crc kubenswrapper[4860]: E1014 14:51:07.061431 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:51:07 crc kubenswrapper[4860]: I1014 14:51:07.061287 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:51:07 crc kubenswrapper[4860]: I1014 14:51:07.061532 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:51:07 crc kubenswrapper[4860]: E1014 14:51:07.061624 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:51:07 crc kubenswrapper[4860]: E1014 14:51:07.061708 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:51:07 crc kubenswrapper[4860]: E1014 14:51:07.061766 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:51:09 crc kubenswrapper[4860]: I1014 14:51:09.060771 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:51:09 crc kubenswrapper[4860]: I1014 14:51:09.060807 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:51:09 crc kubenswrapper[4860]: I1014 14:51:09.060900 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:51:09 crc kubenswrapper[4860]: I1014 14:51:09.060971 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:51:09 crc kubenswrapper[4860]: E1014 14:51:09.061857 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:51:09 crc kubenswrapper[4860]: E1014 14:51:09.061917 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:51:09 crc kubenswrapper[4860]: E1014 14:51:09.061993 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:51:09 crc kubenswrapper[4860]: E1014 14:51:09.062203 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:51:09 crc kubenswrapper[4860]: E1014 14:51:09.161477 4860 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 14 14:51:10 crc kubenswrapper[4860]: I1014 14:51:10.061141 4860 scope.go:117] "RemoveContainer" containerID="4dd2467d8c6acdf7e08b9eab1c254d5a14134e125433a9b40b8eb6dc66cbe4ab" Oct 14 14:51:10 crc kubenswrapper[4860]: I1014 14:51:10.739450 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dcr2g_ceb09eae-57c9-4a8e-95d5-aa40e49f7316/kube-multus/1.log" Oct 14 14:51:10 crc kubenswrapper[4860]: I1014 14:51:10.739880 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dcr2g" event={"ID":"ceb09eae-57c9-4a8e-95d5-aa40e49f7316","Type":"ContainerStarted","Data":"e4348dbcafb0a136a8778e5f340f7e1294d56b5f49a540dcf5c355211a7a4501"} Oct 14 14:51:11 crc kubenswrapper[4860]: I1014 14:51:11.061328 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:51:11 crc kubenswrapper[4860]: I1014 14:51:11.061344 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:51:11 crc kubenswrapper[4860]: E1014 14:51:11.062151 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:51:11 crc kubenswrapper[4860]: I1014 14:51:11.061405 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:51:11 crc kubenswrapper[4860]: E1014 14:51:11.062215 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:51:11 crc kubenswrapper[4860]: I1014 14:51:11.061351 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:51:11 crc kubenswrapper[4860]: E1014 14:51:11.062322 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:51:11 crc kubenswrapper[4860]: E1014 14:51:11.062082 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:51:13 crc kubenswrapper[4860]: I1014 14:51:13.061042 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:51:13 crc kubenswrapper[4860]: I1014 14:51:13.061160 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:51:13 crc kubenswrapper[4860]: E1014 14:51:13.061298 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 14:51:13 crc kubenswrapper[4860]: I1014 14:51:13.061317 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:51:13 crc kubenswrapper[4860]: I1014 14:51:13.061358 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:51:13 crc kubenswrapper[4860]: E1014 14:51:13.061397 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 14:51:13 crc kubenswrapper[4860]: E1014 14:51:13.061487 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtscw" podUID="2b36dd73-c75d-446e-85fe-d11afdd5a816" Oct 14 14:51:13 crc kubenswrapper[4860]: E1014 14:51:13.061585 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 14:51:15 crc kubenswrapper[4860]: I1014 14:51:15.060761 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:51:15 crc kubenswrapper[4860]: I1014 14:51:15.060791 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:51:15 crc kubenswrapper[4860]: I1014 14:51:15.060822 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:51:15 crc kubenswrapper[4860]: I1014 14:51:15.060851 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:51:15 crc kubenswrapper[4860]: I1014 14:51:15.064203 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 14 14:51:15 crc kubenswrapper[4860]: I1014 14:51:15.064687 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 14 14:51:15 crc kubenswrapper[4860]: I1014 14:51:15.066313 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 14 14:51:15 crc kubenswrapper[4860]: I1014 14:51:15.066469 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 14 14:51:15 crc kubenswrapper[4860]: I1014 14:51:15.066760 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 14 14:51:15 crc kubenswrapper[4860]: I1014 14:51:15.067079 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.190930 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.222953 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m84ss"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.223421 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.224341 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z2m7d"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.225209 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.225712 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.226149 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.236221 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.236575 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.236621 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.238539 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.243224 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.243226 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.243223 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.246262 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.246268 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.246480 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.246498 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.246516 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.246763 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.246780 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.246898 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.247001 4860 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.247419 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.247578 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.250519 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.250528 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.250772 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.253623 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.253626 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.257450 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jnwqb"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.258039 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.261998 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.262256 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.264219 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-74th4"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.264775 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.266790 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.267393 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.268205 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.268366 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.269553 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b2j8s"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.269940 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ccjhg"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.270410 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ccjhg" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.270746 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.272222 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.274455 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.281818 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.282356 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.282595 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.282646 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.282692 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.282653 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.282973 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.283184 4860 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.283365 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.283502 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.283629 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.283733 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.284366 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.284877 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.285068 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.285388 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.285964 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.286358 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.286472 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.286477 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.286554 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.298286 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5xlzj"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.298874 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.299590 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.309812 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.310947 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-sr5b4"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.311360 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.313345 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.314009 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.314185 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.314982 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330430 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdb25ff1-18af-4f95-a3e7-09472726d3df-serving-cert\") pod \"route-controller-manager-6576b87f9c-2dz4s\" (UID: \"bdb25ff1-18af-4f95-a3e7-09472726d3df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330467 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e925912-cc05-4c2b-8de7-ba05cd298123-images\") pod \"machine-api-operator-5694c8668f-jnwqb\" (UID: \"8e925912-cc05-4c2b-8de7-ba05cd298123\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330484 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e925912-cc05-4c2b-8de7-ba05cd298123-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jnwqb\" (UID: \"8e925912-cc05-4c2b-8de7-ba05cd298123\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330506 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-audit-policies\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330524 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330543 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/16ad23c1-8e88-4556-85ce-0eca934160f9-audit\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330562 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16ad23c1-8e88-4556-85ce-0eca934160f9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330578 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kvr7\" (UniqueName: \"kubernetes.io/projected/34a37609-7fba-4b24-93ab-36d55d11dfe8-kube-api-access-2kvr7\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330597 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330614 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b44ba50-cb4c-4014-90ca-ae91d2875037-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lf94b\" (UID: \"1b44ba50-cb4c-4014-90ca-ae91d2875037\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330634 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/16ad23c1-8e88-4556-85ce-0eca934160f9-image-import-ca\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330648 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdb25ff1-18af-4f95-a3e7-09472726d3df-client-ca\") pod \"route-controller-manager-6576b87f9c-2dz4s\" (UID: \"bdb25ff1-18af-4f95-a3e7-09472726d3df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330665 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a37609-7fba-4b24-93ab-36d55d11dfe8-serving-cert\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 
14:51:17.330682 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/16ad23c1-8e88-4556-85ce-0eca934160f9-etcd-serving-ca\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330701 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtpnw\" (UniqueName: \"kubernetes.io/projected/bdb25ff1-18af-4f95-a3e7-09472726d3df-kube-api-access-wtpnw\") pod \"route-controller-manager-6576b87f9c-2dz4s\" (UID: \"bdb25ff1-18af-4f95-a3e7-09472726d3df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330717 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16ad23c1-8e88-4556-85ce-0eca934160f9-audit-dir\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330736 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/16ad23c1-8e88-4556-85ce-0eca934160f9-encryption-config\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330757 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5aa947ac-94f3-4582-a725-18082f637305-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jwpzd\" (UID: \"5aa947ac-94f3-4582-a725-18082f637305\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330777 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-config\") pod \"controller-manager-879f6c89f-m84ss\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330795 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330810 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjczm\" (UniqueName: \"kubernetes.io/projected/5aa947ac-94f3-4582-a725-18082f637305-kube-api-access-rjczm\") pod \"openshift-config-operator-7777fb866f-jwpzd\" (UID: \"5aa947ac-94f3-4582-a725-18082f637305\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330839 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdld4\" (UniqueName: \"kubernetes.io/projected/16ad23c1-8e88-4556-85ce-0eca934160f9-kube-api-access-fdld4\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330854 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f52kc\" (UniqueName: \"kubernetes.io/projected/cb2e06ea-db08-4cee-a50a-0b5bf7cad13d-kube-api-access-f52kc\") pod \"authentication-operator-69f744f599-b2j8s\" (UID: \"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330871 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b44ba50-cb4c-4014-90ca-ae91d2875037-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lf94b\" (UID: \"1b44ba50-cb4c-4014-90ca-ae91d2875037\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330889 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e78f6aa9-6284-4c76-b303-53bdd34b70bf-auth-proxy-config\") pod \"machine-approver-56656f9798-74th4\" (UID: \"e78f6aa9-6284-4c76-b303-53bdd34b70bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330910 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1271b3e0-b6e9-45cf-a267-ab013c556fc6-audit-dir\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330925 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb2e06ea-db08-4cee-a50a-0b5bf7cad13d-serving-cert\") pod \"authentication-operator-69f744f599-b2j8s\" (UID: \"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330954 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/62e3653a-9388-4335-820e-89652ddadba0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ccjhg\" (UID: \"62e3653a-9388-4335-820e-89652ddadba0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ccjhg" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.330973 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 
crc kubenswrapper[4860]: I1014 14:51:17.330989 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/16ad23c1-8e88-4556-85ce-0eca934160f9-node-pullsecrets\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331005 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdb25ff1-18af-4f95-a3e7-09472726d3df-config\") pod \"route-controller-manager-6576b87f9c-2dz4s\" (UID: \"bdb25ff1-18af-4f95-a3e7-09472726d3df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331037 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331056 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/34a37609-7fba-4b24-93ab-36d55d11dfe8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331070 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhcgp\" (UniqueName: \"kubernetes.io/projected/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-kube-api-access-mhcgp\") pod \"controller-manager-879f6c89f-m84ss\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331107 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb2e06ea-db08-4cee-a50a-0b5bf7cad13d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b2j8s\" (UID: \"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331126 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-serving-cert\") pod \"controller-manager-879f6c89f-m84ss\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331146 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331163 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331178 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa947ac-94f3-4582-a725-18082f637305-serving-cert\") pod \"openshift-config-operator-7777fb866f-jwpzd\" (UID: \"5aa947ac-94f3-4582-a725-18082f637305\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331207 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16ad23c1-8e88-4556-85ce-0eca934160f9-etcd-client\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331223 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj67c\" (UniqueName: \"kubernetes.io/projected/e78f6aa9-6284-4c76-b303-53bdd34b70bf-kube-api-access-sj67c\") pod \"machine-approver-56656f9798-74th4\" (UID: \"e78f6aa9-6284-4c76-b303-53bdd34b70bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331241 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/34a37609-7fba-4b24-93ab-36d55d11dfe8-etcd-client\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331258 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34a37609-7fba-4b24-93ab-36d55d11dfe8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331276 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ad23c1-8e88-4556-85ce-0eca934160f9-serving-cert\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331293 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34a37609-7fba-4b24-93ab-36d55d11dfe8-audit-dir\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331311 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331328 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ad23c1-8e88-4556-85ce-0eca934160f9-config\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331353 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e78f6aa9-6284-4c76-b303-53bdd34b70bf-config\") pod \"machine-approver-56656f9798-74th4\" (UID: \"e78f6aa9-6284-4c76-b303-53bdd34b70bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331370 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e925912-cc05-4c2b-8de7-ba05cd298123-config\") pod \"machine-api-operator-5694c8668f-jnwqb\" (UID: \"8e925912-cc05-4c2b-8de7-ba05cd298123\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331387 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331405 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-client-ca\") pod \"controller-manager-879f6c89f-m84ss\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331419 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m84ss\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331436 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb2e06ea-db08-4cee-a50a-0b5bf7cad13d-config\") pod \"authentication-operator-69f744f599-b2j8s\" (UID: \"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331453 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/34a37609-7fba-4b24-93ab-36d55d11dfe8-encryption-config\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331480 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331498 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e78f6aa9-6284-4c76-b303-53bdd34b70bf-machine-approver-tls\") pod \"machine-approver-56656f9798-74th4\" (UID: \"e78f6aa9-6284-4c76-b303-53bdd34b70bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331518 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxpxb\" (UniqueName: \"kubernetes.io/projected/62e3653a-9388-4335-820e-89652ddadba0-kube-api-access-lxpxb\") pod \"cluster-samples-operator-665b6dd947-ccjhg\" (UID: \"62e3653a-9388-4335-820e-89652ddadba0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ccjhg" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331537 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331554 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fzx5\" (UniqueName: \"kubernetes.io/projected/8e925912-cc05-4c2b-8de7-ba05cd298123-kube-api-access-6fzx5\") pod \"machine-api-operator-5694c8668f-jnwqb\" (UID: \"8e925912-cc05-4c2b-8de7-ba05cd298123\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331571 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb2e06ea-db08-4cee-a50a-0b5bf7cad13d-service-ca-bundle\") pod \"authentication-operator-69f744f599-b2j8s\" (UID: \"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331590 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wzsc\" (UniqueName: \"kubernetes.io/projected/1271b3e0-b6e9-45cf-a267-ab013c556fc6-kube-api-access-6wzsc\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 
14:51:17.331607 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34a37609-7fba-4b24-93ab-36d55d11dfe8-audit-policies\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.331625 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f52hg\" (UniqueName: \"kubernetes.io/projected/1b44ba50-cb4c-4014-90ca-ae91d2875037-kube-api-access-f52hg\") pod \"openshift-apiserver-operator-796bbdcf4f-lf94b\" (UID: \"1b44ba50-cb4c-4014-90ca-ae91d2875037\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.346649 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.346807 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.346915 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.346995 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.347083 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.347378 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.347506 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.348721 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.348836 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.348961 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.349081 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.349334 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.349391 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.349540 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 14 14:51:17 crc kubenswrapper[4860]: 
I1014 14:51:17.349832 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.349941 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.350048 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.350122 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.350395 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.350464 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.350537 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.350540 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.350945 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.352089 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.354147 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.354267 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.354383 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m84ss"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.354417 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.354434 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bvxsd"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.354827 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bvxsd" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.355109 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.355625 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.359464 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.359675 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.361768 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.362023 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.362276 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.362446 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.378137 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b2j8s"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.378543 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.378660 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.379547 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z2m7d"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.380105 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.380396 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.380517 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.380815 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.383100 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b4brk"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.383146 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.383624 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.383641 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-b4brk" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.384080 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.384147 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.387016 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.387243 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mslb8"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.387777 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.397997 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.398834 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.400638 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.402776 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.410218 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-c7tw5"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.421263 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.421435 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.421532 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.423531 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-smx67"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.423938 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.424089 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-msfwt"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.424187 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-c7tw5" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.424412 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.424538 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.424795 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.424952 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.425413 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.427545 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.431329 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.431747 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2t9qw"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.432110 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.432534 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.432625 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.432802 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2t9qw" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.432874 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/62e3653a-9388-4335-820e-89652ddadba0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ccjhg\" (UID: \"62e3653a-9388-4335-820e-89652ddadba0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ccjhg" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.432912 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb2e06ea-db08-4cee-a50a-0b5bf7cad13d-serving-cert\") pod \"authentication-operator-69f744f599-b2j8s\" (UID: \"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.432934 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.432963 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/16ad23c1-8e88-4556-85ce-0eca934160f9-node-pullsecrets\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.432980 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdb25ff1-18af-4f95-a3e7-09472726d3df-config\") pod \"route-controller-manager-6576b87f9c-2dz4s\" (UID: \"bdb25ff1-18af-4f95-a3e7-09472726d3df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.432996 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/34a37609-7fba-4b24-93ab-36d55d11dfe8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433011 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhcgp\" (UniqueName: \"kubernetes.io/projected/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-kube-api-access-mhcgp\") pod \"controller-manager-879f6c89f-m84ss\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433048 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ccd794b-61d7-4a05-a75a-c6ef83877769-bound-sa-token\") pod \"ingress-operator-5b745b69d9-smx67\" (UID: \"4ccd794b-61d7-4a05-a75a-c6ef83877769\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" Oct 14 14:51:17 crc 
kubenswrapper[4860]: I1014 14:51:17.433064 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433079 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-serving-cert\") pod \"controller-manager-879f6c89f-m84ss\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433096 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7czzj\" (UniqueName: \"kubernetes.io/projected/41466bd8-8531-4987-b1b6-ef965ebe180a-kube-api-access-7czzj\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433110 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb2e06ea-db08-4cee-a50a-0b5bf7cad13d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b2j8s\" (UID: \"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433124 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa947ac-94f3-4582-a725-18082f637305-serving-cert\") pod \"openshift-config-operator-7777fb866f-jwpzd\" (UID: \"5aa947ac-94f3-4582-a725-18082f637305\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433145 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433162 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433185 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16ad23c1-8e88-4556-85ce-0eca934160f9-etcd-client\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433202 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sj67c\" (UniqueName: \"kubernetes.io/projected/e78f6aa9-6284-4c76-b303-53bdd34b70bf-kube-api-access-sj67c\") pod \"machine-approver-56656f9798-74th4\" (UID: \"e78f6aa9-6284-4c76-b303-53bdd34b70bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433215 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/34a37609-7fba-4b24-93ab-36d55d11dfe8-etcd-client\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433229 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34a37609-7fba-4b24-93ab-36d55d11dfe8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433244 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ad23c1-8e88-4556-85ce-0eca934160f9-serving-cert\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433260 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34a37609-7fba-4b24-93ab-36d55d11dfe8-audit-dir\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433276 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ad23c1-8e88-4556-85ce-0eca934160f9-config\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433293 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d41c5f7d-d392-418f-af58-6f69862c74ea-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-glhbr\" (UID: \"d41c5f7d-d392-418f-af58-6f69862c74ea\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433309 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433329 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e78f6aa9-6284-4c76-b303-53bdd34b70bf-config\") pod \"machine-approver-56656f9798-74th4\" (UID: \"e78f6aa9-6284-4c76-b303-53bdd34b70bf\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433353 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433369 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-client-ca\") pod \"controller-manager-879f6c89f-m84ss\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433384 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m84ss\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433415 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41466bd8-8531-4987-b1b6-ef965ebe180a-etcd-client\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433434 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e925912-cc05-4c2b-8de7-ba05cd298123-config\") pod \"machine-api-operator-5694c8668f-jnwqb\" (UID: \"8e925912-cc05-4c2b-8de7-ba05cd298123\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433450 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb2e06ea-db08-4cee-a50a-0b5bf7cad13d-config\") pod \"authentication-operator-69f744f599-b2j8s\" (UID: \"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433466 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/34a37609-7fba-4b24-93ab-36d55d11dfe8-encryption-config\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433481 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ccd794b-61d7-4a05-a75a-c6ef83877769-metrics-tls\") pod \"ingress-operator-5b745b69d9-smx67\" (UID: \"4ccd794b-61d7-4a05-a75a-c6ef83877769\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433496 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433512 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j94ph\" (UniqueName: \"kubernetes.io/projected/4ccd794b-61d7-4a05-a75a-c6ef83877769-kube-api-access-j94ph\") pod \"ingress-operator-5b745b69d9-smx67\" (UID: \"4ccd794b-61d7-4a05-a75a-c6ef83877769\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433538 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e78f6aa9-6284-4c76-b303-53bdd34b70bf-machine-approver-tls\") pod \"machine-approver-56656f9798-74th4\" (UID: \"e78f6aa9-6284-4c76-b303-53bdd34b70bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433571 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxpxb\" (UniqueName: \"kubernetes.io/projected/62e3653a-9388-4335-820e-89652ddadba0-kube-api-access-lxpxb\") pod \"cluster-samples-operator-665b6dd947-ccjhg\" (UID: \"62e3653a-9388-4335-820e-89652ddadba0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ccjhg" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433587 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433609 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb2e06ea-db08-4cee-a50a-0b5bf7cad13d-service-ca-bundle\") pod \"authentication-operator-69f744f599-b2j8s\" (UID: \"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433631 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wzsc\" (UniqueName: \"kubernetes.io/projected/1271b3e0-b6e9-45cf-a267-ab013c556fc6-kube-api-access-6wzsc\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433648 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34a37609-7fba-4b24-93ab-36d55d11dfe8-audit-policies\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433664 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f52hg\" (UniqueName: 
\"kubernetes.io/projected/1b44ba50-cb4c-4014-90ca-ae91d2875037-kube-api-access-f52hg\") pod \"openshift-apiserver-operator-796bbdcf4f-lf94b\" (UID: \"1b44ba50-cb4c-4014-90ca-ae91d2875037\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433679 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41466bd8-8531-4987-b1b6-ef965ebe180a-serving-cert\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433704 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fzx5\" (UniqueName: \"kubernetes.io/projected/8e925912-cc05-4c2b-8de7-ba05cd298123-kube-api-access-6fzx5\") pod \"machine-api-operator-5694c8668f-jnwqb\" (UID: \"8e925912-cc05-4c2b-8de7-ba05cd298123\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433721 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41466bd8-8531-4987-b1b6-ef965ebe180a-config\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433749 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e925912-cc05-4c2b-8de7-ba05cd298123-images\") pod \"machine-api-operator-5694c8668f-jnwqb\" (UID: \"8e925912-cc05-4c2b-8de7-ba05cd298123\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433765 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e925912-cc05-4c2b-8de7-ba05cd298123-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jnwqb\" (UID: \"8e925912-cc05-4c2b-8de7-ba05cd298123\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433781 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdb25ff1-18af-4f95-a3e7-09472726d3df-serving-cert\") pod \"route-controller-manager-6576b87f9c-2dz4s\" (UID: \"bdb25ff1-18af-4f95-a3e7-09472726d3df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433797 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ccd794b-61d7-4a05-a75a-c6ef83877769-trusted-ca\") pod \"ingress-operator-5b745b69d9-smx67\" (UID: \"4ccd794b-61d7-4a05-a75a-c6ef83877769\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433814 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-audit-policies\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: 
\"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433831 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433852 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/16ad23c1-8e88-4556-85ce-0eca934160f9-audit\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433870 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16ad23c1-8e88-4556-85ce-0eca934160f9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433886 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v7s5\" (UniqueName: \"kubernetes.io/projected/ab78dde3-9209-4eb9-9d32-bf7d9ecc6d1b-kube-api-access-5v7s5\") pod \"dns-operator-744455d44c-c7tw5\" (UID: \"ab78dde3-9209-4eb9-9d32-bf7d9ecc6d1b\") " pod="openshift-dns-operator/dns-operator-744455d44c-c7tw5" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433903 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d41c5f7d-d392-418f-af58-6f69862c74ea-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-glhbr\" (UID: \"d41c5f7d-d392-418f-af58-6f69862c74ea\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433919 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2m86\" (UniqueName: \"kubernetes.io/projected/d41c5f7d-d392-418f-af58-6f69862c74ea-kube-api-access-n2m86\") pod \"cluster-image-registry-operator-dc59b4c8b-glhbr\" (UID: \"d41c5f7d-d392-418f-af58-6f69862c74ea\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433936 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kvr7\" (UniqueName: \"kubernetes.io/projected/34a37609-7fba-4b24-93ab-36d55d11dfe8-kube-api-access-2kvr7\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433951 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433968 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b44ba50-cb4c-4014-90ca-ae91d2875037-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lf94b\" (UID: \"1b44ba50-cb4c-4014-90ca-ae91d2875037\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.433984 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7656db5-f224-4bf0-baea-63eefd6ad8f2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xm46p\" (UID: \"d7656db5-f224-4bf0-baea-63eefd6ad8f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434000 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d41c5f7d-d392-418f-af58-6f69862c74ea-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-glhbr\" (UID: \"d41c5f7d-d392-418f-af58-6f69862c74ea\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434015 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/16ad23c1-8e88-4556-85ce-0eca934160f9-image-import-ca\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434067 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdb25ff1-18af-4f95-a3e7-09472726d3df-client-ca\") pod \"route-controller-manager-6576b87f9c-2dz4s\" (UID: \"bdb25ff1-18af-4f95-a3e7-09472726d3df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434084 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/41466bd8-8531-4987-b1b6-ef965ebe180a-etcd-ca\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434102 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a37609-7fba-4b24-93ab-36d55d11dfe8-serving-cert\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434117 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/16ad23c1-8e88-4556-85ce-0eca934160f9-etcd-serving-ca\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434133 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtpnw\" (UniqueName: \"kubernetes.io/projected/bdb25ff1-18af-4f95-a3e7-09472726d3df-kube-api-access-wtpnw\") pod \"route-controller-manager-6576b87f9c-2dz4s\" (UID: \"bdb25ff1-18af-4f95-a3e7-09472726d3df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434142 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ad23c1-8e88-4556-85ce-0eca934160f9-config\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434150 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h484b\" (UniqueName: \"kubernetes.io/projected/d7656db5-f224-4bf0-baea-63eefd6ad8f2-kube-api-access-h484b\") pod \"openshift-controller-manager-operator-756b6f6bc6-xm46p\" (UID: \"d7656db5-f224-4bf0-baea-63eefd6ad8f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434168 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab78dde3-9209-4eb9-9d32-bf7d9ecc6d1b-metrics-tls\") pod \"dns-operator-744455d44c-c7tw5\" (UID: \"ab78dde3-9209-4eb9-9d32-bf7d9ecc6d1b\") " pod="openshift-dns-operator/dns-operator-744455d44c-c7tw5" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434185 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/16ad23c1-8e88-4556-85ce-0eca934160f9-encryption-config\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434200 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16ad23c1-8e88-4556-85ce-0eca934160f9-audit-dir\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434216 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5aa947ac-94f3-4582-a725-18082f637305-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jwpzd\" (UID: \"5aa947ac-94f3-4582-a725-18082f637305\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434234 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-config\") pod \"controller-manager-879f6c89f-m84ss\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434250 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjczm\" (UniqueName: 
\"kubernetes.io/projected/5aa947ac-94f3-4582-a725-18082f637305-kube-api-access-rjczm\") pod \"openshift-config-operator-7777fb866f-jwpzd\" (UID: \"5aa947ac-94f3-4582-a725-18082f637305\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434264 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434279 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdld4\" (UniqueName: \"kubernetes.io/projected/16ad23c1-8e88-4556-85ce-0eca934160f9-kube-api-access-fdld4\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434294 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/41466bd8-8531-4987-b1b6-ef965ebe180a-etcd-service-ca\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434319 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e78f6aa9-6284-4c76-b303-53bdd34b70bf-auth-proxy-config\") pod \"machine-approver-56656f9798-74th4\" (UID: \"e78f6aa9-6284-4c76-b303-53bdd34b70bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434336 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws8mw\" (UniqueName: \"kubernetes.io/projected/6224e386-928b-4d64-a7f5-d43fb86e4b3a-kube-api-access-ws8mw\") pod \"downloads-7954f5f757-bvxsd\" (UID: \"6224e386-928b-4d64-a7f5-d43fb86e4b3a\") " pod="openshift-console/downloads-7954f5f757-bvxsd" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434353 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f52kc\" (UniqueName: \"kubernetes.io/projected/cb2e06ea-db08-4cee-a50a-0b5bf7cad13d-kube-api-access-f52kc\") pod \"authentication-operator-69f744f599-b2j8s\" (UID: \"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434370 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b44ba50-cb4c-4014-90ca-ae91d2875037-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lf94b\" (UID: \"1b44ba50-cb4c-4014-90ca-ae91d2875037\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434386 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7656db5-f224-4bf0-baea-63eefd6ad8f2-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-xm46p\" (UID: \"d7656db5-f224-4bf0-baea-63eefd6ad8f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.434404 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1271b3e0-b6e9-45cf-a267-ab013c556fc6-audit-dir\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.435001 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1271b3e0-b6e9-45cf-a267-ab013c556fc6-audit-dir\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.435023 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/16ad23c1-8e88-4556-85ce-0eca934160f9-audit\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.435969 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16ad23c1-8e88-4556-85ce-0eca934160f9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.436877 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.437586 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.437748 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.442612 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/16ad23c1-8e88-4556-85ce-0eca934160f9-encryption-config\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.451150 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb2e06ea-db08-4cee-a50a-0b5bf7cad13d-serving-cert\") pod \"authentication-operator-69f744f599-b2j8s\" (UID: \"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.451367 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16ad23c1-8e88-4556-85ce-0eca934160f9-audit-dir\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.451703 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5aa947ac-94f3-4582-a725-18082f637305-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jwpzd\" (UID: \"5aa947ac-94f3-4582-a725-18082f637305\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.451865 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/62e3653a-9388-4335-820e-89652ddadba0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ccjhg\" (UID: \"62e3653a-9388-4335-820e-89652ddadba0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ccjhg" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.452730 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-config\") pod \"controller-manager-879f6c89f-m84ss\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.452830 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb2e06ea-db08-4cee-a50a-0b5bf7cad13d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b2j8s\" (UID: \"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.452888 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/16ad23c1-8e88-4556-85ce-0eca934160f9-node-pullsecrets\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.453684 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/bdb25ff1-18af-4f95-a3e7-09472726d3df-config\") pod \"route-controller-manager-6576b87f9c-2dz4s\" (UID: \"bdb25ff1-18af-4f95-a3e7-09472726d3df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.454861 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-cv25g"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.455260 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.455328 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.456053 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/34a37609-7fba-4b24-93ab-36d55d11dfe8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.456890 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b44ba50-cb4c-4014-90ca-ae91d2875037-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lf94b\" (UID: \"1b44ba50-cb4c-4014-90ca-ae91d2875037\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.457744 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/16ad23c1-8e88-4556-85ce-0eca934160f9-image-import-ca\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.458399 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdb25ff1-18af-4f95-a3e7-09472726d3df-client-ca\") pod \"route-controller-manager-6576b87f9c-2dz4s\" (UID: \"bdb25ff1-18af-4f95-a3e7-09472726d3df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.470088 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.470143 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.470573 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e78f6aa9-6284-4c76-b303-53bdd34b70bf-auth-proxy-config\") pod \"machine-approver-56656f9798-74th4\" (UID: \"e78f6aa9-6284-4c76-b303-53bdd34b70bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.470602 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jnwqb"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 
14:51:17.470684 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.471180 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b44ba50-cb4c-4014-90ca-ae91d2875037-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lf94b\" (UID: \"1b44ba50-cb4c-4014-90ca-ae91d2875037\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.471933 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.472279 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.472848 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.473269 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e78f6aa9-6284-4c76-b303-53bdd34b70bf-config\") pod \"machine-approver-56656f9798-74th4\" (UID: \"e78f6aa9-6284-4c76-b303-53bdd34b70bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.473769 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb2e06ea-db08-4cee-a50a-0b5bf7cad13d-service-ca-bundle\") pod \"authentication-operator-69f744f599-b2j8s\" (UID: \"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.473958 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.474063 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.474325 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/34a37609-7fba-4b24-93ab-36d55d11dfe8-audit-policies\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.474740 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-client-ca\") pod \"controller-manager-879f6c89f-m84ss\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.475057 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e925912-cc05-4c2b-8de7-ba05cd298123-images\") pod \"machine-api-operator-5694c8668f-jnwqb\" (UID: \"8e925912-cc05-4c2b-8de7-ba05cd298123\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.475713 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.475858 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.475952 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m84ss\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.475981 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cbpmm"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.476243 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.476486 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-serving-cert\") pod \"controller-manager-879f6c89f-m84ss\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.478603 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-audit-policies\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.482404 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a37609-7fba-4b24-93ab-36d55d11dfe8-serving-cert\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.482722 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.483487 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e925912-cc05-4c2b-8de7-ba05cd298123-config\") pod \"machine-api-operator-5694c8668f-jnwqb\" (UID: \"8e925912-cc05-4c2b-8de7-ba05cd298123\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.483915 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb2e06ea-db08-4cee-a50a-0b5bf7cad13d-config\") pod \"authentication-operator-69f744f599-b2j8s\" (UID: \"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.484181 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.476100 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.486305 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ql4q7"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.486746 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ccjhg"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.486839 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fml8s"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.487210 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sr5b4"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.487303 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-smx67"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.487419 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.487775 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/16ad23c1-8e88-4556-85ce-0eca934160f9-etcd-serving-ca\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.487821 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/34a37609-7fba-4b24-93ab-36d55d11dfe8-encryption-config\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.488133 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.488223 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.488538 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.488822 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e925912-cc05-4c2b-8de7-ba05cd298123-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jnwqb\" (UID: \"8e925912-cc05-4c2b-8de7-ba05cd298123\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.489146 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc 
kubenswrapper[4860]: I1014 14:51:17.489207 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34a37609-7fba-4b24-93ab-36d55d11dfe8-audit-dir\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.489239 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ql4q7" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.489632 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbpmm" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.489798 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34a37609-7fba-4b24-93ab-36d55d11dfe8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.476048 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.489879 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.493635 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.493855 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.493948 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.494649 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.494939 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.495048 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.496696 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-xjwnb"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.498330 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-xjwnb" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.501001 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e78f6aa9-6284-4c76-b303-53bdd34b70bf-machine-approver-tls\") pod \"machine-approver-56656f9798-74th4\" (UID: \"e78f6aa9-6284-4c76-b303-53bdd34b70bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.502279 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.502506 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ad23c1-8e88-4556-85ce-0eca934160f9-serving-cert\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.502798 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.503286 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rr82d"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.503667 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rr82d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.504223 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdb25ff1-18af-4f95-a3e7-09472726d3df-serving-cert\") pod \"route-controller-manager-6576b87f9c-2dz4s\" (UID: \"bdb25ff1-18af-4f95-a3e7-09472726d3df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.504311 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.504485 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16ad23c1-8e88-4556-85ce-0eca934160f9-etcd-client\") pod \"apiserver-76f77b778f-z2m7d\" (UID: \"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.504868 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.506014 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.506261 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.513662 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa947ac-94f3-4582-a725-18082f637305-serving-cert\") pod \"openshift-config-operator-7777fb866f-jwpzd\" (UID: \"5aa947ac-94f3-4582-a725-18082f637305\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.513881 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.513676 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/34a37609-7fba-4b24-93ab-36d55d11dfe8-etcd-client\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.514868 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.515988 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.517934 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.521155 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-srxmc"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.526739 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.528466 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bvxsd"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.528497 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-c7tw5"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.528603 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.533481 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2t9qw"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.533581 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-vnc8p"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.534580 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-vnc8p" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535404 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab78dde3-9209-4eb9-9d32-bf7d9ecc6d1b-metrics-tls\") pod \"dns-operator-744455d44c-c7tw5\" (UID: \"ab78dde3-9209-4eb9-9d32-bf7d9ecc6d1b\") " pod="openshift-dns-operator/dns-operator-744455d44c-c7tw5" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535433 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h484b\" (UniqueName: \"kubernetes.io/projected/d7656db5-f224-4bf0-baea-63eefd6ad8f2-kube-api-access-h484b\") pod \"openshift-controller-manager-operator-756b6f6bc6-xm46p\" (UID: \"d7656db5-f224-4bf0-baea-63eefd6ad8f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535438 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-msfwt"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535475 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/41466bd8-8531-4987-b1b6-ef965ebe180a-etcd-service-ca\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535497 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws8mw\" (UniqueName: \"kubernetes.io/projected/6224e386-928b-4d64-a7f5-d43fb86e4b3a-kube-api-access-ws8mw\") pod \"downloads-7954f5f757-bvxsd\" (UID: \"6224e386-928b-4d64-a7f5-d43fb86e4b3a\") " pod="openshift-console/downloads-7954f5f757-bvxsd" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535520 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7656db5-f224-4bf0-baea-63eefd6ad8f2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xm46p\" (UID: \"d7656db5-f224-4bf0-baea-63eefd6ad8f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535552 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ccd794b-61d7-4a05-a75a-c6ef83877769-bound-sa-token\") pod \"ingress-operator-5b745b69d9-smx67\" (UID: \"4ccd794b-61d7-4a05-a75a-c6ef83877769\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535570 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7czzj\" (UniqueName: 
\"kubernetes.io/projected/41466bd8-8531-4987-b1b6-ef965ebe180a-kube-api-access-7czzj\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535605 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d41c5f7d-d392-418f-af58-6f69862c74ea-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-glhbr\" (UID: \"d41c5f7d-d392-418f-af58-6f69862c74ea\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535630 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41466bd8-8531-4987-b1b6-ef965ebe180a-etcd-client\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535644 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ccd794b-61d7-4a05-a75a-c6ef83877769-metrics-tls\") pod \"ingress-operator-5b745b69d9-smx67\" (UID: \"4ccd794b-61d7-4a05-a75a-c6ef83877769\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535659 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j94ph\" (UniqueName: \"kubernetes.io/projected/4ccd794b-61d7-4a05-a75a-c6ef83877769-kube-api-access-j94ph\") pod \"ingress-operator-5b745b69d9-smx67\" (UID: \"4ccd794b-61d7-4a05-a75a-c6ef83877769\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535709 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41466bd8-8531-4987-b1b6-ef965ebe180a-serving-cert\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535727 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41466bd8-8531-4987-b1b6-ef965ebe180a-config\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535744 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ccd794b-61d7-4a05-a75a-c6ef83877769-trusted-ca\") pod \"ingress-operator-5b745b69d9-smx67\" (UID: \"4ccd794b-61d7-4a05-a75a-c6ef83877769\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535759 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v7s5\" (UniqueName: \"kubernetes.io/projected/ab78dde3-9209-4eb9-9d32-bf7d9ecc6d1b-kube-api-access-5v7s5\") pod \"dns-operator-744455d44c-c7tw5\" (UID: \"ab78dde3-9209-4eb9-9d32-bf7d9ecc6d1b\") " pod="openshift-dns-operator/dns-operator-744455d44c-c7tw5" Oct 14 14:51:17 crc 
kubenswrapper[4860]: I1014 14:51:17.535782 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d41c5f7d-d392-418f-af58-6f69862c74ea-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-glhbr\" (UID: \"d41c5f7d-d392-418f-af58-6f69862c74ea\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535799 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2m86\" (UniqueName: \"kubernetes.io/projected/d41c5f7d-d392-418f-af58-6f69862c74ea-kube-api-access-n2m86\") pod \"cluster-image-registry-operator-dc59b4c8b-glhbr\" (UID: \"d41c5f7d-d392-418f-af58-6f69862c74ea\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535816 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7656db5-f224-4bf0-baea-63eefd6ad8f2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xm46p\" (UID: \"d7656db5-f224-4bf0-baea-63eefd6ad8f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535831 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d41c5f7d-d392-418f-af58-6f69862c74ea-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-glhbr\" (UID: \"d41c5f7d-d392-418f-af58-6f69862c74ea\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.535848 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/41466bd8-8531-4987-b1b6-ef965ebe180a-etcd-ca\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.536350 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/41466bd8-8531-4987-b1b6-ef965ebe180a-etcd-service-ca\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.536539 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/41466bd8-8531-4987-b1b6-ef965ebe180a-etcd-ca\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.536816 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5xlzj"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.537014 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41466bd8-8531-4987-b1b6-ef965ebe180a-config\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 
14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.537704 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7656db5-f224-4bf0-baea-63eefd6ad8f2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xm46p\" (UID: \"d7656db5-f224-4bf0-baea-63eefd6ad8f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.537747 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d41c5f7d-d392-418f-af58-6f69862c74ea-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-glhbr\" (UID: \"d41c5f7d-d392-418f-af58-6f69862c74ea\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.537898 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b4brk"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.539416 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mslb8"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.539919 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7656db5-f224-4bf0-baea-63eefd6ad8f2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xm46p\" (UID: \"d7656db5-f224-4bf0-baea-63eefd6ad8f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.539933 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41466bd8-8531-4987-b1b6-ef965ebe180a-etcd-client\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.539981 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.540404 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d41c5f7d-d392-418f-af58-6f69862c74ea-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-glhbr\" (UID: \"d41c5f7d-d392-418f-af58-6f69862c74ea\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.541153 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fml8s"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.542173 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.542395 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41466bd8-8531-4987-b1b6-ef965ebe180a-serving-cert\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:17 crc 
kubenswrapper[4860]: I1014 14:51:17.543246 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.544953 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.545666 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.547923 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cbpmm"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.549142 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ql4q7"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.551086 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.552846 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.554085 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.555446 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.556645 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vnc8p"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.558636 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.559745 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.561304 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-xjwnb"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.562638 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.564598 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-pn2jl"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.565497 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pn2jl" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.566351 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rr82d"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.568854 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-srxmc"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.569908 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.570892 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8"] Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.585906 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.605328 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.625241 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.645320 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.650128 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ccd794b-61d7-4a05-a75a-c6ef83877769-metrics-tls\") pod \"ingress-operator-5b745b69d9-smx67\" (UID: \"4ccd794b-61d7-4a05-a75a-c6ef83877769\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.665170 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.692786 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.698118 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ccd794b-61d7-4a05-a75a-c6ef83877769-trusted-ca\") pod \"ingress-operator-5b745b69d9-smx67\" (UID: \"4ccd794b-61d7-4a05-a75a-c6ef83877769\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.705146 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.725418 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.728072 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab78dde3-9209-4eb9-9d32-bf7d9ecc6d1b-metrics-tls\") pod \"dns-operator-744455d44c-c7tw5\" (UID: \"ab78dde3-9209-4eb9-9d32-bf7d9ecc6d1b\") " pod="openshift-dns-operator/dns-operator-744455d44c-c7tw5" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.745416 4860 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.768100 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.786169 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.805382 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.846107 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.865169 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.885691 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.905588 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.925055 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.945416 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.965774 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 14 14:51:17 crc kubenswrapper[4860]: I1014 14:51:17.986114 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.019351 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kvr7\" (UniqueName: \"kubernetes.io/projected/34a37609-7fba-4b24-93ab-36d55d11dfe8-kube-api-access-2kvr7\") pod \"apiserver-7bbb656c7d-snzz9\" (UID: \"34a37609-7fba-4b24-93ab-36d55d11dfe8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.025174 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.046471 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.065802 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.099555 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdld4\" (UniqueName: \"kubernetes.io/projected/16ad23c1-8e88-4556-85ce-0eca934160f9-kube-api-access-fdld4\") pod \"apiserver-76f77b778f-z2m7d\" (UID: 
\"16ad23c1-8e88-4556-85ce-0eca934160f9\") " pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.120189 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjczm\" (UniqueName: \"kubernetes.io/projected/5aa947ac-94f3-4582-a725-18082f637305-kube-api-access-rjczm\") pod \"openshift-config-operator-7777fb866f-jwpzd\" (UID: \"5aa947ac-94f3-4582-a725-18082f637305\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.125152 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.128889 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.145335 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.151498 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.159015 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.165869 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.185599 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.205302 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.228448 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.245796 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.298332 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhcgp\" (UniqueName: \"kubernetes.io/projected/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-kube-api-access-mhcgp\") pod \"controller-manager-879f6c89f-m84ss\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.303386 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj67c\" (UniqueName: \"kubernetes.io/projected/e78f6aa9-6284-4c76-b303-53bdd34b70bf-kube-api-access-sj67c\") pod \"machine-approver-56656f9798-74th4\" (UID: \"e78f6aa9-6284-4c76-b303-53bdd34b70bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.322657 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f52kc\" (UniqueName: \"kubernetes.io/projected/cb2e06ea-db08-4cee-a50a-0b5bf7cad13d-kube-api-access-f52kc\") pod \"authentication-operator-69f744f599-b2j8s\" (UID: 
\"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.325583 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.346688 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.365622 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.385723 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.405815 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.438632 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.443179 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wzsc\" (UniqueName: \"kubernetes.io/projected/1271b3e0-b6e9-45cf-a267-ab013c556fc6-kube-api-access-6wzsc\") pod \"oauth-openshift-558db77b4-5xlzj\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.460454 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f52hg\" (UniqueName: \"kubernetes.io/projected/1b44ba50-cb4c-4014-90ca-ae91d2875037-kube-api-access-f52hg\") pod \"openshift-apiserver-operator-796bbdcf4f-lf94b\" (UID: \"1b44ba50-cb4c-4014-90ca-ae91d2875037\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.480543 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fzx5\" (UniqueName: \"kubernetes.io/projected/8e925912-cc05-4c2b-8de7-ba05cd298123-kube-api-access-6fzx5\") pod \"machine-api-operator-5694c8668f-jnwqb\" (UID: \"8e925912-cc05-4c2b-8de7-ba05cd298123\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.485969 4860 request.go:700] Waited for 1.009555635s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.485983 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.487486 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.496098 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z2m7d"] Oct 14 14:51:18 crc kubenswrapper[4860]: W1014 14:51:18.502656 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode78f6aa9_6284_4c76_b303_53bdd34b70bf.slice/crio-495eb5c8550b34df0d40d985da3b4a4eb944173da7c8154bc6d0918bc79cbc84 WatchSource:0}: Error finding container 495eb5c8550b34df0d40d985da3b4a4eb944173da7c8154bc6d0918bc79cbc84: Status 404 returned error can't find the container with id 495eb5c8550b34df0d40d985da3b4a4eb944173da7c8154bc6d0918bc79cbc84 Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.505722 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 14 14:51:18 crc kubenswrapper[4860]: W1014 14:51:18.507897 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16ad23c1_8e88_4556_85ce_0eca934160f9.slice/crio-980b95a20c772a9c69264755db5269653174a4df28931eeeaf538fa52a7d3601 WatchSource:0}: Error finding container 980b95a20c772a9c69264755db5269653174a4df28931eeeaf538fa52a7d3601: Status 404 returned error can't find the container with id 980b95a20c772a9c69264755db5269653174a4df28931eeeaf538fa52a7d3601 Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.526378 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.546245 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.551991 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.578008 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.579143 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.599398 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9"] Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.600977 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd"] Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.606209 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.611444 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtpnw\" (UniqueName: \"kubernetes.io/projected/bdb25ff1-18af-4f95-a3e7-09472726d3df-kube-api-access-wtpnw\") pod \"route-controller-manager-6576b87f9c-2dz4s\" (UID: \"bdb25ff1-18af-4f95-a3e7-09472726d3df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.627626 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Oct 14 14:51:18 crc kubenswrapper[4860]: W1014 14:51:18.628977 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34a37609_7fba_4b24_93ab_36d55d11dfe8.slice/crio-f048bf96bafd8de88d72a64672d208e96e62dad862387432805270f92c702dc9 WatchSource:0}: Error finding container f048bf96bafd8de88d72a64672d208e96e62dad862387432805270f92c702dc9: Status 404 returned error can't find the container with id f048bf96bafd8de88d72a64672d208e96e62dad862387432805270f92c702dc9 Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.638007 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m84ss"] Oct 14 14:51:18 crc kubenswrapper[4860]: W1014 14:51:18.638835 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aa947ac_94f3_4582_a725_18082f637305.slice/crio-a45d85ac4103ad6201b0da038b5f660036de3bcec66ba68809e17aaf37afc7af WatchSource:0}: Error finding container a45d85ac4103ad6201b0da038b5f660036de3bcec66ba68809e17aaf37afc7af: Status 404 returned error can't find the container with id a45d85ac4103ad6201b0da038b5f660036de3bcec66ba68809e17aaf37afc7af Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.644869 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Oct 14 14:51:18 crc kubenswrapper[4860]: W1014 14:51:18.658442 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca4179d4_5b4c_4b52_be97_9a0e9aa8c106.slice/crio-e27de943b243fef1d322f58c96c421bc80c71eb5362bf6bc36f93cc6aea53d64 WatchSource:0}: Error finding container e27de943b243fef1d322f58c96c421bc80c71eb5362bf6bc36f93cc6aea53d64: Status 404 returned error can't find the container with id e27de943b243fef1d322f58c96c421bc80c71eb5362bf6bc36f93cc6aea53d64 Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.666344 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.765355 
4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.768068 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.768243 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.768448 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.768844 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.771224 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxpxb\" (UniqueName: \"kubernetes.io/projected/62e3653a-9388-4335-820e-89652ddadba0-kube-api-access-lxpxb\") pod \"cluster-samples-operator-665b6dd947-ccjhg\" (UID: \"62e3653a-9388-4335-820e-89652ddadba0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ccjhg" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.777248 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.786553 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" event={"ID":"5aa947ac-94f3-4582-a725-18082f637305","Type":"ContainerStarted","Data":"a45d85ac4103ad6201b0da038b5f660036de3bcec66ba68809e17aaf37afc7af"} Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.792636 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.809831 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.809872 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" event={"ID":"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106","Type":"ContainerStarted","Data":"e27de943b243fef1d322f58c96c421bc80c71eb5362bf6bc36f93cc6aea53d64"} Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.812398 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" event={"ID":"16ad23c1-8e88-4556-85ce-0eca934160f9","Type":"ContainerStarted","Data":"980b95a20c772a9c69264755db5269653174a4df28931eeeaf538fa52a7d3601"} Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.812538 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ccjhg" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.815175 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" event={"ID":"34a37609-7fba-4b24-93ab-36d55d11dfe8","Type":"ContainerStarted","Data":"f048bf96bafd8de88d72a64672d208e96e62dad862387432805270f92c702dc9"} Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.816330 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" event={"ID":"e78f6aa9-6284-4c76-b303-53bdd34b70bf","Type":"ContainerStarted","Data":"495eb5c8550b34df0d40d985da3b4a4eb944173da7c8154bc6d0918bc79cbc84"} Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.827400 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.848570 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.867313 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.887596 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.901686 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.906359 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.913369 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b2j8s"] Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.925702 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.945929 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.968286 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.987742 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b"] Oct 14 14:51:18 crc kubenswrapper[4860]: I1014 14:51:18.987812 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.002611 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5xlzj"] Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.009893 4860 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 14 14:51:19 crc kubenswrapper[4860]: W1014 14:51:19.025259 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1271b3e0_b6e9_45cf_a267_ab013c556fc6.slice/crio-7e4c11a0af9dd66dd6f35a883116b88bca1c37bbe643bce9290fbe237cc80516 WatchSource:0}: Error finding container 7e4c11a0af9dd66dd6f35a883116b88bca1c37bbe643bce9290fbe237cc80516: Status 404 returned error can't find the container with id 7e4c11a0af9dd66dd6f35a883116b88bca1c37bbe643bce9290fbe237cc80516 Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.029003 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.047373 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.065047 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.080339 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jnwqb"] Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.087608 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.106093 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.125619 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.146681 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s"] Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.146887 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Oct 14 14:51:19 crc kubenswrapper[4860]: W1014 14:51:19.162746 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdb25ff1_18af_4f95_a3e7_09472726d3df.slice/crio-f85af2f1d3b9c6ec45960bd9fe68d5d743a7aeef5a35c86982c22ef39ff02d88 WatchSource:0}: Error finding container f85af2f1d3b9c6ec45960bd9fe68d5d743a7aeef5a35c86982c22ef39ff02d88: Status 404 returned error can't find the container with id f85af2f1d3b9c6ec45960bd9fe68d5d743a7aeef5a35c86982c22ef39ff02d88 Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.164720 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.185208 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.205317 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ccjhg"] Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.205628 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 14 14:51:19 crc 
kubenswrapper[4860]: I1014 14:51:19.227105 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.246438 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.265691 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.286249 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.306409 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.327075 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.346807 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.365870 4860 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.385561 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.405555 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.426151 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.445719 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.465762 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.504109 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h484b\" (UniqueName: \"kubernetes.io/projected/d7656db5-f224-4bf0-baea-63eefd6ad8f2-kube-api-access-h484b\") pod \"openshift-controller-manager-operator-756b6f6bc6-xm46p\" (UID: \"d7656db5-f224-4bf0-baea-63eefd6ad8f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.504277 4860 request.go:700] Waited for 1.968488287s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/default/token Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.526613 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws8mw\" (UniqueName: \"kubernetes.io/projected/6224e386-928b-4d64-a7f5-d43fb86e4b3a-kube-api-access-ws8mw\") pod \"downloads-7954f5f757-bvxsd\" (UID: \"6224e386-928b-4d64-a7f5-d43fb86e4b3a\") " 
pod="openshift-console/downloads-7954f5f757-bvxsd" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.562577 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7czzj\" (UniqueName: \"kubernetes.io/projected/41466bd8-8531-4987-b1b6-ef965ebe180a-kube-api-access-7czzj\") pod \"etcd-operator-b45778765-mslb8\" (UID: \"41466bd8-8531-4987-b1b6-ef965ebe180a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.566386 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ccd794b-61d7-4a05-a75a-c6ef83877769-bound-sa-token\") pod \"ingress-operator-5b745b69d9-smx67\" (UID: \"4ccd794b-61d7-4a05-a75a-c6ef83877769\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.583710 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j94ph\" (UniqueName: \"kubernetes.io/projected/4ccd794b-61d7-4a05-a75a-c6ef83877769-kube-api-access-j94ph\") pod \"ingress-operator-5b745b69d9-smx67\" (UID: \"4ccd794b-61d7-4a05-a75a-c6ef83877769\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.601557 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v7s5\" (UniqueName: \"kubernetes.io/projected/ab78dde3-9209-4eb9-9d32-bf7d9ecc6d1b-kube-api-access-5v7s5\") pod \"dns-operator-744455d44c-c7tw5\" (UID: \"ab78dde3-9209-4eb9-9d32-bf7d9ecc6d1b\") " pod="openshift-dns-operator/dns-operator-744455d44c-c7tw5" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.619335 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d41c5f7d-d392-418f-af58-6f69862c74ea-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-glhbr\" (UID: \"d41c5f7d-d392-418f-af58-6f69862c74ea\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.641727 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2m86\" (UniqueName: \"kubernetes.io/projected/d41c5f7d-d392-418f-af58-6f69862c74ea-kube-api-access-n2m86\") pod \"cluster-image-registry-operator-dc59b4c8b-glhbr\" (UID: \"d41c5f7d-d392-418f-af58-6f69862c74ea\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.645820 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.648372 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bvxsd" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.657243 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.663480 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.665860 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.671228 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.677687 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.684361 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-c7tw5" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.685599 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.781981 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-registry-tls\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782008 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f24486a-4d47-4365-8930-f7eabfd033fa-serving-cert\") pod \"console-operator-58897d9998-b4brk\" (UID: \"0f24486a-4d47-4365-8930-f7eabfd033fa\") " pod="openshift-console-operator/console-operator-58897d9998-b4brk" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782060 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f24486a-4d47-4365-8930-f7eabfd033fa-trusted-ca\") pod \"console-operator-58897d9998-b4brk\" (UID: \"0f24486a-4d47-4365-8930-f7eabfd033fa\") " pod="openshift-console-operator/console-operator-58897d9998-b4brk" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782075 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-trusted-ca-bundle\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782117 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgnm4\" (UniqueName: \"kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-kube-api-access-cgnm4\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782163 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f24486a-4d47-4365-8930-f7eabfd033fa-config\") pod \"console-operator-58897d9998-b4brk\" 
(UID: \"0f24486a-4d47-4365-8930-f7eabfd033fa\") " pod="openshift-console-operator/console-operator-58897d9998-b4brk" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782196 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-service-ca\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782212 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl4wg\" (UniqueName: \"kubernetes.io/projected/0f24486a-4d47-4365-8930-f7eabfd033fa-kube-api-access-kl4wg\") pod \"console-operator-58897d9998-b4brk\" (UID: \"0f24486a-4d47-4365-8930-f7eabfd033fa\") " pod="openshift-console-operator/console-operator-58897d9998-b4brk" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782227 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-trusted-ca\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782242 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-console-config\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782281 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-oauth-serving-cert\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782309 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b285a3-b917-4698-860d-a00c351727f2-console-serving-cert\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782323 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhh6m\" (UniqueName: \"kubernetes.io/projected/b1b285a3-b917-4698-860d-a00c351727f2-kube-api-access-bhh6m\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782337 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-ca-trust-extracted\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782374 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b1b285a3-b917-4698-860d-a00c351727f2-console-oauth-config\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782394 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782442 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-registry-certificates\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782463 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-installation-pull-secrets\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.782477 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-bound-sa-token\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: E1014 14:51:19.783847 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:20.28382462 +0000 UTC m=+141.870608169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.825727 4860 generic.go:334] "Generic (PLEG): container finished" podID="16ad23c1-8e88-4556-85ce-0eca934160f9" containerID="2ec4a0de0657a210649e81679a922ba9f72a1023afa4d957572ca6d2fe9040e8" exitCode=0 Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.825810 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" event={"ID":"16ad23c1-8e88-4556-85ce-0eca934160f9","Type":"ContainerDied","Data":"2ec4a0de0657a210649e81679a922ba9f72a1023afa4d957572ca6d2fe9040e8"} Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.827930 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" event={"ID":"8e925912-cc05-4c2b-8de7-ba05cd298123","Type":"ContainerStarted","Data":"d69e5c783abf127aad2ed3d550c7930f1be04ea1fdfb38c2d59e52f4ca747a5c"} Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.827958 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" event={"ID":"8e925912-cc05-4c2b-8de7-ba05cd298123","Type":"ContainerStarted","Data":"ff7fb8cd5a21a91e048c2a6992edcd4d5e36d61a4c3932034e732babbf72fb37"} Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.832488 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" event={"ID":"1271b3e0-b6e9-45cf-a267-ab013c556fc6","Type":"ContainerStarted","Data":"7c55007a50846c0d7570879ff4288d976d3f2968987fa1a0c72f787738362381"} Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.832523 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" event={"ID":"1271b3e0-b6e9-45cf-a267-ab013c556fc6","Type":"ContainerStarted","Data":"7e4c11a0af9dd66dd6f35a883116b88bca1c37bbe643bce9290fbe237cc80516"} Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.832939 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.834888 4860 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5xlzj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" start-of-body= Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.834936 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" podUID="1271b3e0-b6e9-45cf-a267-ab013c556fc6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.871213 4860 generic.go:334] "Generic (PLEG): container finished" podID="34a37609-7fba-4b24-93ab-36d55d11dfe8" containerID="6fd160b18d67068142c1e38edbb680a684409b302b93697c2f9332fb46d5172f" 
exitCode=0 Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.872133 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" event={"ID":"34a37609-7fba-4b24-93ab-36d55d11dfe8","Type":"ContainerDied","Data":"6fd160b18d67068142c1e38edbb680a684409b302b93697c2f9332fb46d5172f"} Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.874401 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ccjhg" event={"ID":"62e3653a-9388-4335-820e-89652ddadba0","Type":"ContainerStarted","Data":"1c8a9b6595e473137c62fe1238efb72d07fb7125caa1056fda09f42738b8eebb"} Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.887237 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:19 crc kubenswrapper[4860]: E1014 14:51:19.887964 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:20.387940441 +0000 UTC m=+141.974723890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.888542 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6e05731e-4ea5-4b63-8b25-946dc11fa091-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tfj6s\" (UID: \"6e05731e-4ea5-4b63-8b25-946dc11fa091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.888572 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f9wl\" (UniqueName: \"kubernetes.io/projected/6e05731e-4ea5-4b63-8b25-946dc11fa091-kube-api-access-9f9wl\") pod \"olm-operator-6b444d44fb-tfj6s\" (UID: \"6e05731e-4ea5-4b63-8b25-946dc11fa091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.888613 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25zhg\" (UniqueName: \"kubernetes.io/projected/69e4a58b-d51f-447d-82e7-a3e4926c08a1-kube-api-access-25zhg\") pod \"package-server-manager-789f6589d5-pbr7g\" (UID: \"69e4a58b-d51f-447d-82e7-a3e4926c08a1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.889205 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c88dcd06-e148-4382-945e-8700a7400f00-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2sf69\" (UID: \"c88dcd06-e148-4382-945e-8700a7400f00\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.889243 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ccaaad20-81e9-4ba2-ab8b-91bbba22f17f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2t9qw\" (UID: \"ccaaad20-81e9-4ba2-ab8b-91bbba22f17f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2t9qw" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.894318 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed-proxy-tls\") pod \"machine-config-operator-74547568cd-rjc7c\" (UID: \"8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.896108 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7415bf9f-2145-43a1-b6b8-121b39180dbd-mountpoint-dir\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.896167 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ae97ab9f-b072-4cb2-85da-577097382ed5-stats-auth\") pod \"router-default-5444994796-cv25g\" (UID: \"ae97ab9f-b072-4cb2-85da-577097382ed5\") " pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.896431 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/762ac590-5dba-4663-a225-81765c4ae57a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z65gp\" (UID: \"762ac590-5dba-4663-a225-81765c4ae57a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.896461 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l6hj\" (UniqueName: \"kubernetes.io/projected/8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed-kube-api-access-4l6hj\") pod \"machine-config-operator-74547568cd-rjc7c\" (UID: \"8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.896503 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-service-ca\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.896527 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl4wg\" (UniqueName: 
\"kubernetes.io/projected/0f24486a-4d47-4365-8930-f7eabfd033fa-kube-api-access-kl4wg\") pod \"console-operator-58897d9998-b4brk\" (UID: \"0f24486a-4d47-4365-8930-f7eabfd033fa\") " pod="openshift-console-operator/console-operator-58897d9998-b4brk" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.896729 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-console-config\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.896755 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-oauth-serving-cert\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.896776 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66vt6\" (UniqueName: \"kubernetes.io/projected/9a8bfc59-1b02-4a57-8785-146540f864db-kube-api-access-66vt6\") pod \"machine-config-server-pn2jl\" (UID: \"9a8bfc59-1b02-4a57-8785-146540f864db\") " pod="openshift-machine-config-operator/machine-config-server-pn2jl" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.896801 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2726\" (UniqueName: \"kubernetes.io/projected/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-kube-api-access-x2726\") pod \"collect-profiles-29340885-ldcfp\" (UID: \"0fd546c2-8f3f-459f-bd94-75f8d755d9e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.896828 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rjc7c\" (UID: \"8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.896868 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c88dcd06-e148-4382-945e-8700a7400f00-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2sf69\" (UID: \"c88dcd06-e148-4382-945e-8700a7400f00\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.896910 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhh6m\" (UniqueName: \"kubernetes.io/projected/b1b285a3-b917-4698-860d-a00c351727f2-kube-api-access-bhh6m\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.899173 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0162698-ec9f-47b8-896d-af15ae62668a-webhook-cert\") pod 
\"packageserver-d55dfcdfc-xtchg\" (UID: \"f0162698-ec9f-47b8-896d-af15ae62668a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.899206 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8wg9\" (UniqueName: \"kubernetes.io/projected/7415bf9f-2145-43a1-b6b8-121b39180dbd-kube-api-access-s8wg9\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.899244 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx972\" (UniqueName: \"kubernetes.io/projected/c0397d16-c623-4dd4-8b8d-b974bbd1e9db-kube-api-access-bx972\") pod \"dns-default-vnc8p\" (UID: \"c0397d16-c623-4dd4-8b8d-b974bbd1e9db\") " pod="openshift-dns/dns-default-vnc8p" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.899266 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzdm7\" (UniqueName: \"kubernetes.io/projected/ccaaad20-81e9-4ba2-ab8b-91bbba22f17f-kube-api-access-vzdm7\") pod \"multus-admission-controller-857f4d67dd-2t9qw\" (UID: \"ccaaad20-81e9-4ba2-ab8b-91bbba22f17f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2t9qw" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.899312 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7x55\" (UniqueName: \"kubernetes.io/projected/7b5e171c-1dce-4002-9207-474f3fad14a1-kube-api-access-p7x55\") pod \"service-ca-9c57cc56f-xjwnb\" (UID: \"7b5e171c-1dce-4002-9207-474f3fad14a1\") " pod="openshift-service-ca/service-ca-9c57cc56f-xjwnb" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.899334 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/915db808-f0c4-4b81-aaac-8dbfd3a5b201-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gbwlx\" (UID: \"915db808-f0c4-4b81-aaac-8dbfd3a5b201\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.899379 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbbdj\" (UniqueName: \"kubernetes.io/projected/762ac590-5dba-4663-a225-81765c4ae57a-kube-api-access-bbbdj\") pod \"machine-config-controller-84d6567774-z65gp\" (UID: \"762ac590-5dba-4663-a225-81765c4ae57a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.899465 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nggpj\" (UniqueName: \"kubernetes.io/projected/bc47043b-7968-40cf-94f6-3c5a91a433a4-kube-api-access-nggpj\") pod \"service-ca-operator-777779d784-bnsh8\" (UID: \"bc47043b-7968-40cf-94f6-3c5a91a433a4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.904520 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-console-config\") pod \"console-f9d7485db-sr5b4\" (UID: 
\"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.905239 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-oauth-serving-cert\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.905323 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrpl5\" (UniqueName: \"kubernetes.io/projected/eef02ff0-0b8a-4fd2-8ee5-162644e9f38c-kube-api-access-vrpl5\") pod \"migrator-59844c95c7-cbpmm\" (UID: \"eef02ff0-0b8a-4fd2-8ee5-162644e9f38c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbpmm" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.905647 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47072811-afb6-4585-bf8c-4a0234aa5f1d-cert\") pod \"ingress-canary-rr82d\" (UID: \"47072811-afb6-4585-bf8c-4a0234aa5f1d\") " pod="openshift-ingress-canary/ingress-canary-rr82d" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.905688 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae97ab9f-b072-4cb2-85da-577097382ed5-service-ca-bundle\") pod \"router-default-5444994796-cv25g\" (UID: \"ae97ab9f-b072-4cb2-85da-577097382ed5\") " pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.905713 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-registry-tls\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.905850 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-service-ca\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.905735 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0162698-ec9f-47b8-896d-af15ae62668a-apiservice-cert\") pod \"packageserver-d55dfcdfc-xtchg\" (UID: \"f0162698-ec9f-47b8-896d-af15ae62668a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.905992 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5b47471-c477-482c-8462-62edd00df3bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ql4q7\" (UID: \"f5b47471-c477-482c-8462-62edd00df3bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ql4q7" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.906058 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c0397d16-c623-4dd4-8b8d-b974bbd1e9db-metrics-tls\") pod \"dns-default-vnc8p\" (UID: \"c0397d16-c623-4dd4-8b8d-b974bbd1e9db\") " pod="openshift-dns/dns-default-vnc8p" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.906079 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bjfq\" (UniqueName: \"kubernetes.io/projected/91f0ff50-8025-417f-8349-bb7b79b04441-kube-api-access-7bjfq\") pod \"marketplace-operator-79b997595-fml8s\" (UID: \"91f0ff50-8025-417f-8349-bb7b79b04441\") " pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.906112 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-trusted-ca-bundle\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.906285 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" event={"ID":"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106","Type":"ContainerStarted","Data":"43d49602abbfe6c1036f1ebb90f01d1943f78c77b63fba677b34ea280b214d77"} Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.906553 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgnm4\" (UniqueName: \"kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-kube-api-access-cgnm4\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.906676 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae97ab9f-b072-4cb2-85da-577097382ed5-metrics-certs\") pod \"router-default-5444994796-cv25g\" (UID: \"ae97ab9f-b072-4cb2-85da-577097382ed5\") " pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.906777 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/915db808-f0c4-4b81-aaac-8dbfd3a5b201-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gbwlx\" (UID: \"915db808-f0c4-4b81-aaac-8dbfd3a5b201\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.906969 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91f0ff50-8025-417f-8349-bb7b79b04441-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fml8s\" (UID: \"91f0ff50-8025-417f-8349-bb7b79b04441\") " pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.909528 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qns4t\" (UniqueName: \"kubernetes.io/projected/ae97ab9f-b072-4cb2-85da-577097382ed5-kube-api-access-qns4t\") pod 
\"router-default-5444994796-cv25g\" (UID: \"ae97ab9f-b072-4cb2-85da-577097382ed5\") " pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.909583 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kxld\" (UniqueName: \"kubernetes.io/projected/f5b47471-c477-482c-8462-62edd00df3bc-kube-api-access-2kxld\") pod \"control-plane-machine-set-operator-78cbb6b69f-ql4q7\" (UID: \"f5b47471-c477-482c-8462-62edd00df3bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ql4q7" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.910054 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jtf6\" (UniqueName: \"kubernetes.io/projected/7a98873d-4d33-431c-b006-634029aafc31-kube-api-access-5jtf6\") pod \"catalog-operator-68c6474976-fwhnr\" (UID: \"7a98873d-4d33-431c-b006-634029aafc31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.910093 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7a98873d-4d33-431c-b006-634029aafc31-srv-cert\") pod \"catalog-operator-68c6474976-fwhnr\" (UID: \"7a98873d-4d33-431c-b006-634029aafc31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.910869 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7a98873d-4d33-431c-b006-634029aafc31-profile-collector-cert\") pod \"catalog-operator-68c6474976-fwhnr\" (UID: \"7a98873d-4d33-431c-b006-634029aafc31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.910967 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0397d16-c623-4dd4-8b8d-b974bbd1e9db-config-volume\") pod \"dns-default-vnc8p\" (UID: \"c0397d16-c623-4dd4-8b8d-b974bbd1e9db\") " pod="openshift-dns/dns-default-vnc8p" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.911048 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-config-volume\") pod \"collect-profiles-29340885-ldcfp\" (UID: \"0fd546c2-8f3f-459f-bd94-75f8d755d9e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.911080 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpstx\" (UniqueName: \"kubernetes.io/projected/47072811-afb6-4585-bf8c-4a0234aa5f1d-kube-api-access-lpstx\") pod \"ingress-canary-rr82d\" (UID: \"47072811-afb6-4585-bf8c-4a0234aa5f1d\") " pod="openshift-ingress-canary/ingress-canary-rr82d" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.911336 4860 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-m84ss container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection 
refused" start-of-body= Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.911403 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" podUID="ca4179d4-5b4c-4b52-be97-9a0e9aa8c106" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.913640 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-trusted-ca-bundle\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.913711 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.917521 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6e05731e-4ea5-4b63-8b25-946dc11fa091-srv-cert\") pod \"olm-operator-6b444d44fb-tfj6s\" (UID: \"6e05731e-4ea5-4b63-8b25-946dc11fa091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.917713 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f24486a-4d47-4365-8930-f7eabfd033fa-config\") pod \"console-operator-58897d9998-b4brk\" (UID: \"0f24486a-4d47-4365-8930-f7eabfd033fa\") " pod="openshift-console-operator/console-operator-58897d9998-b4brk" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.917799 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915db808-f0c4-4b81-aaac-8dbfd3a5b201-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gbwlx\" (UID: \"915db808-f0c4-4b81-aaac-8dbfd3a5b201\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.918236 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91f0ff50-8025-417f-8349-bb7b79b04441-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fml8s\" (UID: \"91f0ff50-8025-417f-8349-bb7b79b04441\") " pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.918604 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b40d986-5c61-48a7-bcf1-89a6d8939870-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qztlr\" (UID: \"4b40d986-5c61-48a7-bcf1-89a6d8939870\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.918636 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b40d986-5c61-48a7-bcf1-89a6d8939870-config\") pod \"kube-apiserver-operator-766d6c64bb-qztlr\" (UID: 
\"4b40d986-5c61-48a7-bcf1-89a6d8939870\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.918685 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c88dcd06-e148-4382-945e-8700a7400f00-config\") pod \"kube-controller-manager-operator-78b949d7b-2sf69\" (UID: \"c88dcd06-e148-4382-945e-8700a7400f00\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.930852 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-trusted-ca\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.931541 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-secret-volume\") pod \"collect-profiles-29340885-ldcfp\" (UID: \"0fd546c2-8f3f-459f-bd94-75f8d755d9e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.931566 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/762ac590-5dba-4663-a225-81765c4ae57a-proxy-tls\") pod \"machine-config-controller-84d6567774-z65gp\" (UID: \"762ac590-5dba-4663-a225-81765c4ae57a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.931615 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7415bf9f-2145-43a1-b6b8-121b39180dbd-csi-data-dir\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.931657 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7415bf9f-2145-43a1-b6b8-121b39180dbd-plugins-dir\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.931678 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b285a3-b917-4698-860d-a00c351727f2-console-serving-cert\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.931696 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/582a197a-7948-4494-b5dc-cb2c0d014e11-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-64chq\" (UID: \"582a197a-7948-4494-b5dc-cb2c0d014e11\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.931715 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed-images\") pod \"machine-config-operator-74547568cd-rjc7c\" (UID: \"8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.931746 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-ca-trust-extracted\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.931763 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b1b285a3-b917-4698-860d-a00c351727f2-console-oauth-config\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.931787 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.931853 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-registry-certificates\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.931879 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ae97ab9f-b072-4cb2-85da-577097382ed5-default-certificate\") pod \"router-default-5444994796-cv25g\" (UID: \"ae97ab9f-b072-4cb2-85da-577097382ed5\") " pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.931910 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc47043b-7968-40cf-94f6-3c5a91a433a4-serving-cert\") pod \"service-ca-operator-777779d784-bnsh8\" (UID: \"bc47043b-7968-40cf-94f6-3c5a91a433a4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.931927 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9a8bfc59-1b02-4a57-8785-146540f864db-certs\") pod \"machine-config-server-pn2jl\" (UID: \"9a8bfc59-1b02-4a57-8785-146540f864db\") " pod="openshift-machine-config-operator/machine-config-server-pn2jl" Oct 14 14:51:19 crc 
kubenswrapper[4860]: I1014 14:51:19.931959 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-installation-pull-secrets\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.931977 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-bound-sa-token\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.931994 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p87qm\" (UniqueName: \"kubernetes.io/projected/f0162698-ec9f-47b8-896d-af15ae62668a-kube-api-access-p87qm\") pod \"packageserver-d55dfcdfc-xtchg\" (UID: \"f0162698-ec9f-47b8-896d-af15ae62668a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.932020 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7415bf9f-2145-43a1-b6b8-121b39180dbd-socket-dir\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.932073 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7415bf9f-2145-43a1-b6b8-121b39180dbd-registration-dir\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.932089 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9a8bfc59-1b02-4a57-8785-146540f864db-node-bootstrap-token\") pod \"machine-config-server-pn2jl\" (UID: \"9a8bfc59-1b02-4a57-8785-146540f864db\") " pod="openshift-machine-config-operator/machine-config-server-pn2jl" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.932114 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f0162698-ec9f-47b8-896d-af15ae62668a-tmpfs\") pod \"packageserver-d55dfcdfc-xtchg\" (UID: \"f0162698-ec9f-47b8-896d-af15ae62668a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.932133 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/582a197a-7948-4494-b5dc-cb2c0d014e11-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-64chq\" (UID: \"582a197a-7948-4494-b5dc-cb2c0d014e11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.932153 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f24486a-4d47-4365-8930-f7eabfd033fa-serving-cert\") pod \"console-operator-58897d9998-b4brk\" (UID: \"0f24486a-4d47-4365-8930-f7eabfd033fa\") " pod="openshift-console-operator/console-operator-58897d9998-b4brk" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.932170 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7hpq\" (UniqueName: \"kubernetes.io/projected/582a197a-7948-4494-b5dc-cb2c0d014e11-kube-api-access-s7hpq\") pod \"kube-storage-version-migrator-operator-b67b599dd-64chq\" (UID: \"582a197a-7948-4494-b5dc-cb2c0d014e11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.932210 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b40d986-5c61-48a7-bcf1-89a6d8939870-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qztlr\" (UID: \"4b40d986-5c61-48a7-bcf1-89a6d8939870\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.932227 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc47043b-7968-40cf-94f6-3c5a91a433a4-config\") pod \"service-ca-operator-777779d784-bnsh8\" (UID: \"bc47043b-7968-40cf-94f6-3c5a91a433a4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.932245 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/69e4a58b-d51f-447d-82e7-a3e4926c08a1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pbr7g\" (UID: \"69e4a58b-d51f-447d-82e7-a3e4926c08a1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.932282 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f24486a-4d47-4365-8930-f7eabfd033fa-trusted-ca\") pod \"console-operator-58897d9998-b4brk\" (UID: \"0f24486a-4d47-4365-8930-f7eabfd033fa\") " pod="openshift-console-operator/console-operator-58897d9998-b4brk" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.932300 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7b5e171c-1dce-4002-9207-474f3fad14a1-signing-key\") pod \"service-ca-9c57cc56f-xjwnb\" (UID: \"7b5e171c-1dce-4002-9207-474f3fad14a1\") " pod="openshift-service-ca/service-ca-9c57cc56f-xjwnb" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.932327 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7b5e171c-1dce-4002-9207-474f3fad14a1-signing-cabundle\") pod \"service-ca-9c57cc56f-xjwnb\" (UID: \"7b5e171c-1dce-4002-9207-474f3fad14a1\") " pod="openshift-service-ca/service-ca-9c57cc56f-xjwnb" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.934334 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-trusted-ca\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.939427 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-ca-trust-extracted\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.942135 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-registry-certificates\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.942129 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b285a3-b917-4698-860d-a00c351727f2-console-serving-cert\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: E1014 14:51:19.942755 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:20.442736214 +0000 UTC m=+142.029519663 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.943563 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b1b285a3-b917-4698-860d-a00c351727f2-console-oauth-config\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.964550 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-installation-pull-secrets\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.964968 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-registry-tls\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.966915 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f24486a-4d47-4365-8930-f7eabfd033fa-trusted-ca\") pod \"console-operator-58897d9998-b4brk\" (UID: \"0f24486a-4d47-4365-8930-f7eabfd033fa\") " pod="openshift-console-operator/console-operator-58897d9998-b4brk" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.921203 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f24486a-4d47-4365-8930-f7eabfd033fa-config\") pod \"console-operator-58897d9998-b4brk\" (UID: \"0f24486a-4d47-4365-8930-f7eabfd033fa\") " pod="openshift-console-operator/console-operator-58897d9998-b4brk" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.970784 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhh6m\" (UniqueName: \"kubernetes.io/projected/b1b285a3-b917-4698-860d-a00c351727f2-kube-api-access-bhh6m\") pod \"console-f9d7485db-sr5b4\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.970850 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f24486a-4d47-4365-8930-f7eabfd033fa-serving-cert\") pod \"console-operator-58897d9998-b4brk\" (UID: \"0f24486a-4d47-4365-8930-f7eabfd033fa\") " pod="openshift-console-operator/console-operator-58897d9998-b4brk" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.973009 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" 
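The E-level record above is the notable event in this stretch: the image-registry pod's PVC cannot be staged because the kubevirt.io.hostpath-provisioner CSI driver has not yet registered with this kubelet (the csi-hostpathplugin-srxmc pod that provides it is still being mounted in parallel), so the operation is parked and retried. Registration normally happens when the driver's registrar sidecar drops a socket into the kubelet's plugin-registration directory. A hedged sketch of checking what is registered on a node; the directory below is a common default, not a path stated in this log:

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"strings"
)

// pluginDir is the kubelet plugin-registration directory on a typical
// installation; adjust for your node (an assumption, not from this log).
const pluginDir = "/var/lib/kubelet/plugins_registry"

func main() {
	// Each registered CSI driver exposes a registration socket here;
	// if no socket for the driver exists yet, mounts fail exactly as above.
	err := filepath.WalkDir(pluginDir, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if !d.IsDir() && strings.HasSuffix(path, ".sock") {
			fmt.Println("registration socket:", path)
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "walk:", err)
	}
}
```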
event={"ID":"bdb25ff1-18af-4f95-a3e7-09472726d3df","Type":"ContainerStarted","Data":"8735501bf10727b6612ca4daf1edbff9c867540a3fd741903e813ab5b7c88323"} Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.973072 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" event={"ID":"bdb25ff1-18af-4f95-a3e7-09472726d3df","Type":"ContainerStarted","Data":"f85af2f1d3b9c6ec45960bd9fe68d5d743a7aeef5a35c86982c22ef39ff02d88"} Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.974381 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.975976 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" event={"ID":"e78f6aa9-6284-4c76-b303-53bdd34b70bf","Type":"ContainerStarted","Data":"5fc50159201868424ef5066bd77770009b8b3b07f0205407693c203114474b53"} Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.975999 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" event={"ID":"e78f6aa9-6284-4c76-b303-53bdd34b70bf","Type":"ContainerStarted","Data":"7de367fbceaeb16323363d884db8b7d95ae33d4f0d49e254bfa84ea576abf945"} Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.976607 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl4wg\" (UniqueName: \"kubernetes.io/projected/0f24486a-4d47-4365-8930-f7eabfd033fa-kube-api-access-kl4wg\") pod \"console-operator-58897d9998-b4brk\" (UID: \"0f24486a-4d47-4365-8930-f7eabfd033fa\") " pod="openshift-console-operator/console-operator-58897d9998-b4brk" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.981708 4860 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-2dz4s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.981751 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" podUID="bdb25ff1-18af-4f95-a3e7-09472726d3df" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.984510 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-bound-sa-token\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.985810 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgnm4\" (UniqueName: \"kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-kube-api-access-cgnm4\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.987140 4860 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" event={"ID":"5aa947ac-94f3-4582-a725-18082f637305","Type":"ContainerDied","Data":"bad5764e8de6aabc3bff66e8d7785662df1afd62497d40a9a8ea51a5e2bbeef0"} Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.986810 4860 generic.go:334] "Generic (PLEG): container finished" podID="5aa947ac-94f3-4582-a725-18082f637305" containerID="bad5764e8de6aabc3bff66e8d7785662df1afd62497d40a9a8ea51a5e2bbeef0" exitCode=0 Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.998809 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b" event={"ID":"1b44ba50-cb4c-4014-90ca-ae91d2875037","Type":"ContainerStarted","Data":"a8c8171714c68459f7fc6150d9d0d323aacdc54e7d628e1d15a2a8ff9327b874"} Oct 14 14:51:19 crc kubenswrapper[4860]: I1014 14:51:19.998863 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b" event={"ID":"1b44ba50-cb4c-4014-90ca-ae91d2875037","Type":"ContainerStarted","Data":"6dffe65b1d318636605591210dd5aba1e79579b010b64d736155f1bcf6ae9b0b"} Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.024787 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" event={"ID":"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d","Type":"ContainerStarted","Data":"79a37739d9bddd70591e066f8dee9c1ceb5a47ce7ed0e9ed08f82563a4637745"} Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.024898 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" event={"ID":"cb2e06ea-db08-4cee-a50a-0b5bf7cad13d","Type":"ContainerStarted","Data":"aa25d93f2d2b9e505f176a79f088df251332cc126ac1c61ed9dbcd158cc0de87"} Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.032968 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033152 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/582a197a-7948-4494-b5dc-cb2c0d014e11-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-64chq\" (UID: \"582a197a-7948-4494-b5dc-cb2c0d014e11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033174 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed-images\") pod \"machine-config-operator-74547568cd-rjc7c\" (UID: \"8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033214 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ae97ab9f-b072-4cb2-85da-577097382ed5-default-certificate\") pod \"router-default-5444994796-cv25g\" (UID: \"ae97ab9f-b072-4cb2-85da-577097382ed5\") " 
pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033237 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p87qm\" (UniqueName: \"kubernetes.io/projected/f0162698-ec9f-47b8-896d-af15ae62668a-kube-api-access-p87qm\") pod \"packageserver-d55dfcdfc-xtchg\" (UID: \"f0162698-ec9f-47b8-896d-af15ae62668a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033254 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc47043b-7968-40cf-94f6-3c5a91a433a4-serving-cert\") pod \"service-ca-operator-777779d784-bnsh8\" (UID: \"bc47043b-7968-40cf-94f6-3c5a91a433a4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033268 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9a8bfc59-1b02-4a57-8785-146540f864db-certs\") pod \"machine-config-server-pn2jl\" (UID: \"9a8bfc59-1b02-4a57-8785-146540f864db\") " pod="openshift-machine-config-operator/machine-config-server-pn2jl" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033297 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7415bf9f-2145-43a1-b6b8-121b39180dbd-socket-dir\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033312 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f0162698-ec9f-47b8-896d-af15ae62668a-tmpfs\") pod \"packageserver-d55dfcdfc-xtchg\" (UID: \"f0162698-ec9f-47b8-896d-af15ae62668a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033330 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7415bf9f-2145-43a1-b6b8-121b39180dbd-registration-dir\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033375 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9a8bfc59-1b02-4a57-8785-146540f864db-node-bootstrap-token\") pod \"machine-config-server-pn2jl\" (UID: \"9a8bfc59-1b02-4a57-8785-146540f864db\") " pod="openshift-machine-config-operator/machine-config-server-pn2jl" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033401 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/582a197a-7948-4494-b5dc-cb2c0d014e11-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-64chq\" (UID: \"582a197a-7948-4494-b5dc-cb2c0d014e11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033419 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7hpq\" (UniqueName: 
\"kubernetes.io/projected/582a197a-7948-4494-b5dc-cb2c0d014e11-kube-api-access-s7hpq\") pod \"kube-storage-version-migrator-operator-b67b599dd-64chq\" (UID: \"582a197a-7948-4494-b5dc-cb2c0d014e11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033623 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b40d986-5c61-48a7-bcf1-89a6d8939870-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qztlr\" (UID: \"4b40d986-5c61-48a7-bcf1-89a6d8939870\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033650 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc47043b-7968-40cf-94f6-3c5a91a433a4-config\") pod \"service-ca-operator-777779d784-bnsh8\" (UID: \"bc47043b-7968-40cf-94f6-3c5a91a433a4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033670 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/69e4a58b-d51f-447d-82e7-a3e4926c08a1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pbr7g\" (UID: \"69e4a58b-d51f-447d-82e7-a3e4926c08a1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033713 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7b5e171c-1dce-4002-9207-474f3fad14a1-signing-key\") pod \"service-ca-9c57cc56f-xjwnb\" (UID: \"7b5e171c-1dce-4002-9207-474f3fad14a1\") " pod="openshift-service-ca/service-ca-9c57cc56f-xjwnb" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033731 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7b5e171c-1dce-4002-9207-474f3fad14a1-signing-cabundle\") pod \"service-ca-9c57cc56f-xjwnb\" (UID: \"7b5e171c-1dce-4002-9207-474f3fad14a1\") " pod="openshift-service-ca/service-ca-9c57cc56f-xjwnb" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033748 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6e05731e-4ea5-4b63-8b25-946dc11fa091-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tfj6s\" (UID: \"6e05731e-4ea5-4b63-8b25-946dc11fa091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033787 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f9wl\" (UniqueName: \"kubernetes.io/projected/6e05731e-4ea5-4b63-8b25-946dc11fa091-kube-api-access-9f9wl\") pod \"olm-operator-6b444d44fb-tfj6s\" (UID: \"6e05731e-4ea5-4b63-8b25-946dc11fa091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033804 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25zhg\" (UniqueName: 
\"kubernetes.io/projected/69e4a58b-d51f-447d-82e7-a3e4926c08a1-kube-api-access-25zhg\") pod \"package-server-manager-789f6589d5-pbr7g\" (UID: \"69e4a58b-d51f-447d-82e7-a3e4926c08a1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033831 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c88dcd06-e148-4382-945e-8700a7400f00-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2sf69\" (UID: \"c88dcd06-e148-4382-945e-8700a7400f00\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.033997 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ccaaad20-81e9-4ba2-ab8b-91bbba22f17f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2t9qw\" (UID: \"ccaaad20-81e9-4ba2-ab8b-91bbba22f17f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2t9qw" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.034126 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed-proxy-tls\") pod \"machine-config-operator-74547568cd-rjc7c\" (UID: \"8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.034142 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7415bf9f-2145-43a1-b6b8-121b39180dbd-mountpoint-dir\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.034158 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ae97ab9f-b072-4cb2-85da-577097382ed5-stats-auth\") pod \"router-default-5444994796-cv25g\" (UID: \"ae97ab9f-b072-4cb2-85da-577097382ed5\") " pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.034208 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/762ac590-5dba-4663-a225-81765c4ae57a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z65gp\" (UID: \"762ac590-5dba-4663-a225-81765c4ae57a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.034225 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l6hj\" (UniqueName: \"kubernetes.io/projected/8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed-kube-api-access-4l6hj\") pod \"machine-config-operator-74547568cd-rjc7c\" (UID: \"8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.034794 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66vt6\" (UniqueName: \"kubernetes.io/projected/9a8bfc59-1b02-4a57-8785-146540f864db-kube-api-access-66vt6\") pod 
\"machine-config-server-pn2jl\" (UID: \"9a8bfc59-1b02-4a57-8785-146540f864db\") " pod="openshift-machine-config-operator/machine-config-server-pn2jl" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.034813 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2726\" (UniqueName: \"kubernetes.io/projected/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-kube-api-access-x2726\") pod \"collect-profiles-29340885-ldcfp\" (UID: \"0fd546c2-8f3f-459f-bd94-75f8d755d9e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.034848 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c88dcd06-e148-4382-945e-8700a7400f00-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2sf69\" (UID: \"c88dcd06-e148-4382-945e-8700a7400f00\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.034866 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rjc7c\" (UID: \"8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.034897 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0162698-ec9f-47b8-896d-af15ae62668a-webhook-cert\") pod \"packageserver-d55dfcdfc-xtchg\" (UID: \"f0162698-ec9f-47b8-896d-af15ae62668a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.034921 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8wg9\" (UniqueName: \"kubernetes.io/projected/7415bf9f-2145-43a1-b6b8-121b39180dbd-kube-api-access-s8wg9\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.034938 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx972\" (UniqueName: \"kubernetes.io/projected/c0397d16-c623-4dd4-8b8d-b974bbd1e9db-kube-api-access-bx972\") pod \"dns-default-vnc8p\" (UID: \"c0397d16-c623-4dd4-8b8d-b974bbd1e9db\") " pod="openshift-dns/dns-default-vnc8p" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.034953 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzdm7\" (UniqueName: \"kubernetes.io/projected/ccaaad20-81e9-4ba2-ab8b-91bbba22f17f-kube-api-access-vzdm7\") pod \"multus-admission-controller-857f4d67dd-2t9qw\" (UID: \"ccaaad20-81e9-4ba2-ab8b-91bbba22f17f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2t9qw" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.034970 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7x55\" (UniqueName: \"kubernetes.io/projected/7b5e171c-1dce-4002-9207-474f3fad14a1-kube-api-access-p7x55\") pod \"service-ca-9c57cc56f-xjwnb\" (UID: \"7b5e171c-1dce-4002-9207-474f3fad14a1\") " pod="openshift-service-ca/service-ca-9c57cc56f-xjwnb" Oct 14 
14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.034989 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/915db808-f0c4-4b81-aaac-8dbfd3a5b201-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gbwlx\" (UID: \"915db808-f0c4-4b81-aaac-8dbfd3a5b201\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035006 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbbdj\" (UniqueName: \"kubernetes.io/projected/762ac590-5dba-4663-a225-81765c4ae57a-kube-api-access-bbbdj\") pod \"machine-config-controller-84d6567774-z65gp\" (UID: \"762ac590-5dba-4663-a225-81765c4ae57a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035043 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrpl5\" (UniqueName: \"kubernetes.io/projected/eef02ff0-0b8a-4fd2-8ee5-162644e9f38c-kube-api-access-vrpl5\") pod \"migrator-59844c95c7-cbpmm\" (UID: \"eef02ff0-0b8a-4fd2-8ee5-162644e9f38c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbpmm" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035059 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47072811-afb6-4585-bf8c-4a0234aa5f1d-cert\") pod \"ingress-canary-rr82d\" (UID: \"47072811-afb6-4585-bf8c-4a0234aa5f1d\") " pod="openshift-ingress-canary/ingress-canary-rr82d" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035076 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nggpj\" (UniqueName: \"kubernetes.io/projected/bc47043b-7968-40cf-94f6-3c5a91a433a4-kube-api-access-nggpj\") pod \"service-ca-operator-777779d784-bnsh8\" (UID: \"bc47043b-7968-40cf-94f6-3c5a91a433a4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035092 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae97ab9f-b072-4cb2-85da-577097382ed5-service-ca-bundle\") pod \"router-default-5444994796-cv25g\" (UID: \"ae97ab9f-b072-4cb2-85da-577097382ed5\") " pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035107 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0162698-ec9f-47b8-896d-af15ae62668a-apiservice-cert\") pod \"packageserver-d55dfcdfc-xtchg\" (UID: \"f0162698-ec9f-47b8-896d-af15ae62668a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035124 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5b47471-c477-482c-8462-62edd00df3bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ql4q7\" (UID: \"f5b47471-c477-482c-8462-62edd00df3bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ql4q7" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035143 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c0397d16-c623-4dd4-8b8d-b974bbd1e9db-metrics-tls\") pod \"dns-default-vnc8p\" (UID: \"c0397d16-c623-4dd4-8b8d-b974bbd1e9db\") " pod="openshift-dns/dns-default-vnc8p" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035164 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bjfq\" (UniqueName: \"kubernetes.io/projected/91f0ff50-8025-417f-8349-bb7b79b04441-kube-api-access-7bjfq\") pod \"marketplace-operator-79b997595-fml8s\" (UID: \"91f0ff50-8025-417f-8349-bb7b79b04441\") " pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035199 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae97ab9f-b072-4cb2-85da-577097382ed5-metrics-certs\") pod \"router-default-5444994796-cv25g\" (UID: \"ae97ab9f-b072-4cb2-85da-577097382ed5\") " pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035215 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qns4t\" (UniqueName: \"kubernetes.io/projected/ae97ab9f-b072-4cb2-85da-577097382ed5-kube-api-access-qns4t\") pod \"router-default-5444994796-cv25g\" (UID: \"ae97ab9f-b072-4cb2-85da-577097382ed5\") " pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035232 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kxld\" (UniqueName: \"kubernetes.io/projected/f5b47471-c477-482c-8462-62edd00df3bc-kube-api-access-2kxld\") pod \"control-plane-machine-set-operator-78cbb6b69f-ql4q7\" (UID: \"f5b47471-c477-482c-8462-62edd00df3bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ql4q7" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035249 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jtf6\" (UniqueName: \"kubernetes.io/projected/7a98873d-4d33-431c-b006-634029aafc31-kube-api-access-5jtf6\") pod \"catalog-operator-68c6474976-fwhnr\" (UID: \"7a98873d-4d33-431c-b006-634029aafc31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035263 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/915db808-f0c4-4b81-aaac-8dbfd3a5b201-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gbwlx\" (UID: \"915db808-f0c4-4b81-aaac-8dbfd3a5b201\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035283 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91f0ff50-8025-417f-8349-bb7b79b04441-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fml8s\" (UID: \"91f0ff50-8025-417f-8349-bb7b79b04441\") " pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035297 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7a98873d-4d33-431c-b006-634029aafc31-srv-cert\") pod 
\"catalog-operator-68c6474976-fwhnr\" (UID: \"7a98873d-4d33-431c-b006-634029aafc31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035335 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7a98873d-4d33-431c-b006-634029aafc31-profile-collector-cert\") pod \"catalog-operator-68c6474976-fwhnr\" (UID: \"7a98873d-4d33-431c-b006-634029aafc31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035351 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0397d16-c623-4dd4-8b8d-b974bbd1e9db-config-volume\") pod \"dns-default-vnc8p\" (UID: \"c0397d16-c623-4dd4-8b8d-b974bbd1e9db\") " pod="openshift-dns/dns-default-vnc8p" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035367 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-config-volume\") pod \"collect-profiles-29340885-ldcfp\" (UID: \"0fd546c2-8f3f-459f-bd94-75f8d755d9e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035385 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpstx\" (UniqueName: \"kubernetes.io/projected/47072811-afb6-4585-bf8c-4a0234aa5f1d-kube-api-access-lpstx\") pod \"ingress-canary-rr82d\" (UID: \"47072811-afb6-4585-bf8c-4a0234aa5f1d\") " pod="openshift-ingress-canary/ingress-canary-rr82d" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035417 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6e05731e-4ea5-4b63-8b25-946dc11fa091-srv-cert\") pod \"olm-operator-6b444d44fb-tfj6s\" (UID: \"6e05731e-4ea5-4b63-8b25-946dc11fa091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035453 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915db808-f0c4-4b81-aaac-8dbfd3a5b201-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gbwlx\" (UID: \"915db808-f0c4-4b81-aaac-8dbfd3a5b201\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035470 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91f0ff50-8025-417f-8349-bb7b79b04441-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fml8s\" (UID: \"91f0ff50-8025-417f-8349-bb7b79b04441\") " pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.035507 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b40d986-5c61-48a7-bcf1-89a6d8939870-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qztlr\" (UID: \"4b40d986-5c61-48a7-bcf1-89a6d8939870\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr" Oct 14 14:51:20 crc 
kubenswrapper[4860]: I1014 14:51:20.035529 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f0162698-ec9f-47b8-896d-af15ae62668a-tmpfs\") pod \"packageserver-d55dfcdfc-xtchg\" (UID: \"f0162698-ec9f-47b8-896d-af15ae62668a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.036248 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b40d986-5c61-48a7-bcf1-89a6d8939870-config\") pod \"kube-apiserver-operator-766d6c64bb-qztlr\" (UID: \"4b40d986-5c61-48a7-bcf1-89a6d8939870\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr"
Oct 14 14:51:20 crc kubenswrapper[4860]: E1014 14:51:20.036274 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:20.536251539 +0000 UTC m=+142.123034978 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.037628 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/582a197a-7948-4494-b5dc-cb2c0d014e11-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-64chq\" (UID: \"582a197a-7948-4494-b5dc-cb2c0d014e11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.039594 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7415bf9f-2145-43a1-b6b8-121b39180dbd-socket-dir\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.040258 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed-images\") pod \"machine-config-operator-74547568cd-rjc7c\" (UID: \"8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.040967 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc47043b-7968-40cf-94f6-3c5a91a433a4-serving-cert\") pod \"service-ca-operator-777779d784-bnsh8\" (UID: \"bc47043b-7968-40cf-94f6-3c5a91a433a4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8"
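The same driver-not-registered condition surfaces on the teardown side above, again parked with durationBeforeRetry 500ms. The kubelet paces these per-volume retries with exponential backoff, roughly doubling the delay on each consecutive failure; a toy sketch of that pacing follows (only the initial 500ms appears in this log; the ceiling below is an assumed value):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Doubling retry delay for a failing volume operation, as suggested by
	// the "durationBeforeRetry 500ms" messages above. The ceiling is an
	// assumption for illustration, not a value taken from this log.
	delay := 500 * time.Millisecond
	maxDelay := 2*time.Minute + 2*time.Second
	for attempt := 1; attempt <= 9; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```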
\"kube-apiserver-operator-766d6c64bb-qztlr\" (UID: \"4b40d986-5c61-48a7-bcf1-89a6d8939870\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.041063 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c88dcd06-e148-4382-945e-8700a7400f00-config\") pod \"kube-controller-manager-operator-78b949d7b-2sf69\" (UID: \"c88dcd06-e148-4382-945e-8700a7400f00\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.041106 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/762ac590-5dba-4663-a225-81765c4ae57a-proxy-tls\") pod \"machine-config-controller-84d6567774-z65gp\" (UID: \"762ac590-5dba-4663-a225-81765c4ae57a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.041129 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-secret-volume\") pod \"collect-profiles-29340885-ldcfp\" (UID: \"0fd546c2-8f3f-459f-bd94-75f8d755d9e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.041152 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7415bf9f-2145-43a1-b6b8-121b39180dbd-csi-data-dir\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.041204 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7415bf9f-2145-43a1-b6b8-121b39180dbd-plugins-dir\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.043673 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c88dcd06-e148-4382-945e-8700a7400f00-config\") pod \"kube-controller-manager-operator-78b949d7b-2sf69\" (UID: \"c88dcd06-e148-4382-945e-8700a7400f00\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.045602 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7415bf9f-2145-43a1-b6b8-121b39180dbd-registration-dir\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.046741 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/762ac590-5dba-4663-a225-81765c4ae57a-proxy-tls\") pod \"machine-config-controller-84d6567774-z65gp\" (UID: \"762ac590-5dba-4663-a225-81765c4ae57a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.047214 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc47043b-7968-40cf-94f6-3c5a91a433a4-config\") pod \"service-ca-operator-777779d784-bnsh8\" (UID: \"bc47043b-7968-40cf-94f6-3c5a91a433a4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.050259 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-secret-volume\") pod \"collect-profiles-29340885-ldcfp\" (UID: \"0fd546c2-8f3f-459f-bd94-75f8d755d9e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.050421 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7415bf9f-2145-43a1-b6b8-121b39180dbd-csi-data-dir\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.051506 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7415bf9f-2145-43a1-b6b8-121b39180dbd-plugins-dir\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.052005 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c88dcd06-e148-4382-945e-8700a7400f00-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2sf69\" (UID: \"c88dcd06-e148-4382-945e-8700a7400f00\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.059086 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p"] Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.060850 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0162698-ec9f-47b8-896d-af15ae62668a-apiservice-cert\") pod \"packageserver-d55dfcdfc-xtchg\" (UID: \"f0162698-ec9f-47b8-896d-af15ae62668a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.061614 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7b5e171c-1dce-4002-9207-474f3fad14a1-signing-key\") pod \"service-ca-9c57cc56f-xjwnb\" (UID: \"7b5e171c-1dce-4002-9207-474f3fad14a1\") " pod="openshift-service-ca/service-ca-9c57cc56f-xjwnb" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.062275 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7b5e171c-1dce-4002-9207-474f3fad14a1-signing-cabundle\") pod \"service-ca-9c57cc56f-xjwnb\" (UID: \"7b5e171c-1dce-4002-9207-474f3fad14a1\") " pod="openshift-service-ca/service-ca-9c57cc56f-xjwnb" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.062793 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/6e05731e-4ea5-4b63-8b25-946dc11fa091-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tfj6s\" (UID: \"6e05731e-4ea5-4b63-8b25-946dc11fa091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.065009 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/915db808-f0c4-4b81-aaac-8dbfd3a5b201-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gbwlx\" (UID: \"915db808-f0c4-4b81-aaac-8dbfd3a5b201\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.065227 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7415bf9f-2145-43a1-b6b8-121b39180dbd-mountpoint-dir\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.066940 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9a8bfc59-1b02-4a57-8785-146540f864db-certs\") pod \"machine-config-server-pn2jl\" (UID: \"9a8bfc59-1b02-4a57-8785-146540f864db\") " pod="openshift-machine-config-operator/machine-config-server-pn2jl" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.067302 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae97ab9f-b072-4cb2-85da-577097382ed5-service-ca-bundle\") pod \"router-default-5444994796-cv25g\" (UID: \"ae97ab9f-b072-4cb2-85da-577097382ed5\") " pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.067958 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/69e4a58b-d51f-447d-82e7-a3e4926c08a1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pbr7g\" (UID: \"69e4a58b-d51f-447d-82e7-a3e4926c08a1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.074219 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91f0ff50-8025-417f-8349-bb7b79b04441-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fml8s\" (UID: \"91f0ff50-8025-417f-8349-bb7b79b04441\") " pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.077006 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0397d16-c623-4dd4-8b8d-b974bbd1e9db-config-volume\") pod \"dns-default-vnc8p\" (UID: \"c0397d16-c623-4dd4-8b8d-b974bbd1e9db\") " pod="openshift-dns/dns-default-vnc8p" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.079302 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rjc7c\" (UID: \"8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c" Oct 14 14:51:20 
crc kubenswrapper[4860]: I1014 14:51:20.079508 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5b47471-c477-482c-8462-62edd00df3bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ql4q7\" (UID: \"f5b47471-c477-482c-8462-62edd00df3bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ql4q7" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.079565 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-config-volume\") pod \"collect-profiles-29340885-ldcfp\" (UID: \"0fd546c2-8f3f-459f-bd94-75f8d755d9e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.080792 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47072811-afb6-4585-bf8c-4a0234aa5f1d-cert\") pod \"ingress-canary-rr82d\" (UID: \"47072811-afb6-4585-bf8c-4a0234aa5f1d\") " pod="openshift-ingress-canary/ingress-canary-rr82d" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.081612 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed-proxy-tls\") pod \"machine-config-operator-74547568cd-rjc7c\" (UID: \"8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.081743 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c0397d16-c623-4dd4-8b8d-b974bbd1e9db-metrics-tls\") pod \"dns-default-vnc8p\" (UID: \"c0397d16-c623-4dd4-8b8d-b974bbd1e9db\") " pod="openshift-dns/dns-default-vnc8p" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.082199 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/762ac590-5dba-4663-a225-81765c4ae57a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z65gp\" (UID: \"762ac590-5dba-4663-a225-81765c4ae57a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.082677 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0162698-ec9f-47b8-896d-af15ae62668a-webhook-cert\") pod \"packageserver-d55dfcdfc-xtchg\" (UID: \"f0162698-ec9f-47b8-896d-af15ae62668a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.083002 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7a98873d-4d33-431c-b006-634029aafc31-profile-collector-cert\") pod \"catalog-operator-68c6474976-fwhnr\" (UID: \"7a98873d-4d33-431c-b006-634029aafc31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.083306 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae97ab9f-b072-4cb2-85da-577097382ed5-metrics-certs\") pod 
\"router-default-5444994796-cv25g\" (UID: \"ae97ab9f-b072-4cb2-85da-577097382ed5\") " pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.083734 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ae97ab9f-b072-4cb2-85da-577097382ed5-stats-auth\") pod \"router-default-5444994796-cv25g\" (UID: \"ae97ab9f-b072-4cb2-85da-577097382ed5\") " pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.085969 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/582a197a-7948-4494-b5dc-cb2c0d014e11-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-64chq\" (UID: \"582a197a-7948-4494-b5dc-cb2c0d014e11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.087852 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9a8bfc59-1b02-4a57-8785-146540f864db-node-bootstrap-token\") pod \"machine-config-server-pn2jl\" (UID: \"9a8bfc59-1b02-4a57-8785-146540f864db\") " pod="openshift-machine-config-operator/machine-config-server-pn2jl" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.088945 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ae97ab9f-b072-4cb2-85da-577097382ed5-default-certificate\") pod \"router-default-5444994796-cv25g\" (UID: \"ae97ab9f-b072-4cb2-85da-577097382ed5\") " pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.089452 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ccaaad20-81e9-4ba2-ab8b-91bbba22f17f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2t9qw\" (UID: \"ccaaad20-81e9-4ba2-ab8b-91bbba22f17f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2t9qw" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.090365 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7a98873d-4d33-431c-b006-634029aafc31-srv-cert\") pod \"catalog-operator-68c6474976-fwhnr\" (UID: \"7a98873d-4d33-431c-b006-634029aafc31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.094227 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f9wl\" (UniqueName: \"kubernetes.io/projected/6e05731e-4ea5-4b63-8b25-946dc11fa091-kube-api-access-9f9wl\") pod \"olm-operator-6b444d44fb-tfj6s\" (UID: \"6e05731e-4ea5-4b63-8b25-946dc11fa091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s" Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.097313 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6e05731e-4ea5-4b63-8b25-946dc11fa091-srv-cert\") pod \"olm-operator-6b444d44fb-tfj6s\" (UID: \"6e05731e-4ea5-4b63-8b25-946dc11fa091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s" Oct 14 14:51:20 crc kubenswrapper[4860]: W1014 14:51:20.099385 4860 manager.go:1169] Failed to process watch 
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.104899 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b40d986-5c61-48a7-bcf1-89a6d8939870-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qztlr\" (UID: \"4b40d986-5c61-48a7-bcf1-89a6d8939870\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.108556 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25zhg\" (UniqueName: \"kubernetes.io/projected/69e4a58b-d51f-447d-82e7-a3e4926c08a1-kube-api-access-25zhg\") pod \"package-server-manager-789f6589d5-pbr7g\" (UID: \"69e4a58b-d51f-447d-82e7-a3e4926c08a1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.110195 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.117156 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/915db808-f0c4-4b81-aaac-8dbfd3a5b201-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gbwlx\" (UID: \"915db808-f0c4-4b81-aaac-8dbfd3a5b201\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.123890 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91f0ff50-8025-417f-8349-bb7b79b04441-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fml8s\" (UID: \"91f0ff50-8025-417f-8349-bb7b79b04441\") " pod="openshift-marketplace/marketplace-operator-79b997595-fml8s"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.131366 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p87qm\" (UniqueName: \"kubernetes.io/projected/f0162698-ec9f-47b8-896d-af15ae62668a-kube-api-access-p87qm\") pod \"packageserver-d55dfcdfc-xtchg\" (UID: \"f0162698-ec9f-47b8-896d-af15ae62668a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.134238 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr"]
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.142591 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.144412 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7hpq\" (UniqueName: \"kubernetes.io/projected/582a197a-7948-4494-b5dc-cb2c0d014e11-kube-api-access-s7hpq\") pod \"kube-storage-version-migrator-operator-b67b599dd-64chq\" (UID: \"582a197a-7948-4494-b5dc-cb2c0d014e11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq"
Oct 14 14:51:20 crc kubenswrapper[4860]: E1014 14:51:20.145472 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:20.645454473 +0000 UTC m=+142.232237922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.162465 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mslb8"]
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.162504 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bvxsd"]
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.163481 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-b4brk"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.185073 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bjfq\" (UniqueName: \"kubernetes.io/projected/91f0ff50-8025-417f-8349-bb7b79b04441-kube-api-access-7bjfq\") pod \"marketplace-operator-79b997595-fml8s\" (UID: \"91f0ff50-8025-417f-8349-bb7b79b04441\") " pod="openshift-marketplace/marketplace-operator-79b997595-fml8s"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.190573 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b40d986-5c61-48a7-bcf1-89a6d8939870-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qztlr\" (UID: \"4b40d986-5c61-48a7-bcf1-89a6d8939870\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.199946 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8wg9\" (UniqueName: \"kubernetes.io/projected/7415bf9f-2145-43a1-b6b8-121b39180dbd-kube-api-access-s8wg9\") pod \"csi-hostpathplugin-srxmc\" (UID: \"7415bf9f-2145-43a1-b6b8-121b39180dbd\") " pod="hostpath-provisioner/csi-hostpathplugin-srxmc"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.231438 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx972\" (UniqueName: \"kubernetes.io/projected/c0397d16-c623-4dd4-8b8d-b974bbd1e9db-kube-api-access-bx972\") pod \"dns-default-vnc8p\" (UID: \"c0397d16-c623-4dd4-8b8d-b974bbd1e9db\") " pod="openshift-dns/dns-default-vnc8p"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.240669 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sr5b4"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.243710 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzdm7\" (UniqueName: \"kubernetes.io/projected/ccaaad20-81e9-4ba2-ab8b-91bbba22f17f-kube-api-access-vzdm7\") pod \"multus-admission-controller-857f4d67dd-2t9qw\" (UID: \"ccaaad20-81e9-4ba2-ab8b-91bbba22f17f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2t9qw"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.247180 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:20 crc kubenswrapper[4860]: E1014 14:51:20.247900 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:20.747882265 +0000 UTC m=+142.334665714 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.270981 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7x55\" (UniqueName: \"kubernetes.io/projected/7b5e171c-1dce-4002-9207-474f3fad14a1-kube-api-access-p7x55\") pod \"service-ca-9c57cc56f-xjwnb\" (UID: \"7b5e171c-1dce-4002-9207-474f3fad14a1\") " pod="openshift-service-ca/service-ca-9c57cc56f-xjwnb"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.288005 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbbdj\" (UniqueName: \"kubernetes.io/projected/762ac590-5dba-4663-a225-81765c4ae57a-kube-api-access-bbbdj\") pod \"machine-config-controller-84d6567774-z65gp\" (UID: \"762ac590-5dba-4663-a225-81765c4ae57a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.298690 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.307896 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrpl5\" (UniqueName: \"kubernetes.io/projected/eef02ff0-0b8a-4fd2-8ee5-162644e9f38c-kube-api-access-vrpl5\") pod \"migrator-59844c95c7-cbpmm\" (UID: \"eef02ff0-0b8a-4fd2-8ee5-162644e9f38c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbpmm"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.310934 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2t9qw"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.327987 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nggpj\" (UniqueName: \"kubernetes.io/projected/bc47043b-7968-40cf-94f6-3c5a91a433a4-kube-api-access-nggpj\") pod \"service-ca-operator-777779d784-bnsh8\" (UID: \"bc47043b-7968-40cf-94f6-3c5a91a433a4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.335810 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.340840 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.350221 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fml8s"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.350700 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:20 crc kubenswrapper[4860]: E1014 14:51:20.351006 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:20.850992922 +0000 UTC m=+142.437776371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.354321 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kxld\" (UniqueName: \"kubernetes.io/projected/f5b47471-c477-482c-8462-62edd00df3bc-kube-api-access-2kxld\") pod \"control-plane-machine-set-operator-78cbb6b69f-ql4q7\" (UID: \"f5b47471-c477-482c-8462-62edd00df3bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ql4q7"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.359218 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ql4q7"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.366935 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qns4t\" (UniqueName: \"kubernetes.io/projected/ae97ab9f-b072-4cb2-85da-577097382ed5-kube-api-access-qns4t\") pod \"router-default-5444994796-cv25g\" (UID: \"ae97ab9f-b072-4cb2-85da-577097382ed5\") " pod="openshift-ingress/router-default-5444994796-cv25g"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.368938 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbpmm"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.384126 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.391534 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915db808-f0c4-4b81-aaac-8dbfd3a5b201-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gbwlx\" (UID: \"915db808-f0c4-4b81-aaac-8dbfd3a5b201\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.403457 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-xjwnb"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.406730 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-c7tw5"]
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.411673 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l6hj\" (UniqueName: \"kubernetes.io/projected/8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed-kube-api-access-4l6hj\") pod \"machine-config-operator-74547568cd-rjc7c\" (UID: \"8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.417611 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.418069 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-smx67"]
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.419518 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s"]
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.424048 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66vt6\" (UniqueName: \"kubernetes.io/projected/9a8bfc59-1b02-4a57-8785-146540f864db-kube-api-access-66vt6\") pod \"machine-config-server-pn2jl\" (UID: \"9a8bfc59-1b02-4a57-8785-146540f864db\") " pod="openshift-machine-config-operator/machine-config-server-pn2jl"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.436400 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg"
Oct 14 14:51:20 crc kubenswrapper[4860]: W1014 14:51:20.459465 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab78dde3_9209_4eb9_9d32_bf7d9ecc6d1b.slice/crio-9bbe6e89a485e9669d023c3648bb6a7cf120e5df5223c3583be8422d58f93bdc WatchSource:0}: Error finding container 9bbe6e89a485e9669d023c3648bb6a7cf120e5df5223c3583be8422d58f93bdc: Status 404 returned error can't find the container with id 9bbe6e89a485e9669d023c3648bb6a7cf120e5df5223c3583be8422d58f93bdc
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.460902 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2726\" (UniqueName: \"kubernetes.io/projected/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-kube-api-access-x2726\") pod \"collect-profiles-29340885-ldcfp\" (UID: \"0fd546c2-8f3f-459f-bd94-75f8d755d9e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.468217 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c88dcd06-e148-4382-945e-8700a7400f00-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2sf69\" (UID: \"c88dcd06-e148-4382-945e-8700a7400f00\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.468601 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:20 crc kubenswrapper[4860]: E1014 14:51:20.468720 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:20.968704851 +0000 UTC m=+142.555488290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.468852 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.495821 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-vnc8p"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.501805 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpstx\" (UniqueName: \"kubernetes.io/projected/47072811-afb6-4585-bf8c-4a0234aa5f1d-kube-api-access-lpstx\") pod \"ingress-canary-rr82d\" (UID: \"47072811-afb6-4585-bf8c-4a0234aa5f1d\") " pod="openshift-ingress-canary/ingress-canary-rr82d"
Oct 14 14:51:20 crc kubenswrapper[4860]: E1014 14:51:20.502011 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:21.001991734 +0000 UTC m=+142.588775183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.507313 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pn2jl"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.507591 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-srxmc"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.534573 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jtf6\" (UniqueName: \"kubernetes.io/projected/7a98873d-4d33-431c-b006-634029aafc31-kube-api-access-5jtf6\") pod \"catalog-operator-68c6474976-fwhnr\" (UID: \"7a98873d-4d33-431c-b006-634029aafc31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.569121 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:20 crc kubenswrapper[4860]: E1014 14:51:20.569671 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:21.069646446 +0000 UTC m=+142.656429885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.569764 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:20 crc kubenswrapper[4860]: E1014 14:51:20.570436 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:21.070425046 +0000 UTC m=+142.657208495 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.603707 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.621920 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.624770 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-cv25g"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.675298 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.675983 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx"
Oct 14 14:51:20 crc kubenswrapper[4860]: E1014 14:51:20.676122 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:21.176095134 +0000 UTC m=+142.762878583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.695645 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.721048 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b4brk"]
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.724057 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rr82d"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.740189 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.779962 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.781803 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:20 crc kubenswrapper[4860]: E1014 14:51:20.782198 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:21.282187774 +0000 UTC m=+142.868971213 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.883049 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:20 crc kubenswrapper[4860]: E1014 14:51:20.883176 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:21.38315894 +0000 UTC m=+142.969942389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.883993 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:20 crc kubenswrapper[4860]: E1014 14:51:20.885615 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:21.385599759 +0000 UTC m=+142.972383208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.886592 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2t9qw"]
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.940999 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" podStartSLOduration=122.940979225 podStartE2EDuration="2m2.940979225s" podCreationTimestamp="2025-10-14 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:20.904807012 +0000 UTC m=+142.491590461" watchObservedRunningTime="2025-10-14 14:51:20.940979225 +0000 UTC m=+142.527762674"
Oct 14 14:51:20 crc kubenswrapper[4860]: I1014 14:51:20.993551 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:20 crc kubenswrapper[4860]: E1014 14:51:20.993816 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:21.493801059 +0000 UTC m=+143.080584508 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.013702 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sr5b4"]
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.033646 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fml8s"]
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.055087 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" event={"ID":"34a37609-7fba-4b24-93ab-36d55d11dfe8","Type":"ContainerStarted","Data":"93a74c7bf9b3c4ef1a941818b38a5d7085fbb9a63ced673b191b924ae187afa0"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.078788 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" event={"ID":"41466bd8-8531-4987-b1b6-ef965ebe180a","Type":"ContainerStarted","Data":"1ce9693101f9e3365dca01c0ff9c04fbc9d8f5146c6da386e5b7604aff3d60b0"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.078827 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd"
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.078840 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" event={"ID":"5aa947ac-94f3-4582-a725-18082f637305","Type":"ContainerStarted","Data":"17ca6c321a986d5131388b0db26d0357a86b59205a07893fbfc259f655cf6710"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.078850 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" event={"ID":"4ccd794b-61d7-4a05-a75a-c6ef83877769","Type":"ContainerStarted","Data":"74a6e95af67b47a8f4c63f5fbd87ff48ae3650ef3b332ba8f0d1d037ae98a9ad"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.078862 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-b4brk" event={"ID":"0f24486a-4d47-4365-8930-f7eabfd033fa","Type":"ContainerStarted","Data":"336f2753c594be0d54982c2e700f9a9b33fb2b2f096e2de312d265ec898f5911"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.078877 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" event={"ID":"16ad23c1-8e88-4556-85ce-0eca934160f9","Type":"ContainerStarted","Data":"a16e4c2cf0fa0cd0e54aaa0abc9ddcff1b7b4890f5128ce516d4809cca9f9428"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.078887 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s" event={"ID":"6e05731e-4ea5-4b63-8b25-946dc11fa091","Type":"ContainerStarted","Data":"e3ac7d5e867bc8cdf8aae6784b6882ec640e98b304feeb97125bdc841b0e9b7c"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.078942 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ccjhg" event={"ID":"62e3653a-9388-4335-820e-89652ddadba0","Type":"ContainerStarted","Data":"eec1e2fdc2bc618500e6116039405d49cd28547d9c7c4611a0afb4064fd1215c"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.078963 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ccjhg" event={"ID":"62e3653a-9388-4335-820e-89652ddadba0","Type":"ContainerStarted","Data":"dc4659520621fd98cac51b4848d47a10423d73fd5b21d1311fe231b8ba539b91"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.090704 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" event={"ID":"8e925912-cc05-4c2b-8de7-ba05cd298123","Type":"ContainerStarted","Data":"c93464e688fa8b2d86b9fe5ee6e8695b91dc761cd626bd17efe2a1aad179234d"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.101990 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:21 crc kubenswrapper[4860]: E1014 14:51:21.103826 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:21.603814673 +0000 UTC m=+143.190598122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.104970 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-c7tw5" event={"ID":"ab78dde3-9209-4eb9-9d32-bf7d9ecc6d1b","Type":"ContainerStarted","Data":"9bbe6e89a485e9669d023c3648bb6a7cf120e5df5223c3583be8422d58f93bdc"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.132834 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp"]
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.178091 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bvxsd" event={"ID":"6224e386-928b-4d64-a7f5-d43fb86e4b3a","Type":"ContainerStarted","Data":"fb055d3ac7b35bf8a414d577aec4edb7dc68e4f60c84d6696a3de57c4187214b"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.178144 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bvxsd" event={"ID":"6224e386-928b-4d64-a7f5-d43fb86e4b3a","Type":"ContainerStarted","Data":"8cf85bf30ec2f7dea241c41b3c5da8b5c58b97f9f748360c44e342f76ad8a8c2"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.179208 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bvxsd"
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.182536 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p" event={"ID":"d7656db5-f224-4bf0-baea-63eefd6ad8f2","Type":"ContainerStarted","Data":"9874df9ec301ac2111804997cc50ca4590b7a0bd86926564d651b5ef418c8580"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.182584 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p" event={"ID":"d7656db5-f224-4bf0-baea-63eefd6ad8f2","Type":"ContainerStarted","Data":"b0a0383aee49833f73a8e0ecda951e55aae4b37f8f39dc067aa1e6ac1d3d73da"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.183796 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" event={"ID":"d41c5f7d-d392-418f-af58-6f69862c74ea","Type":"ContainerStarted","Data":"cbf817cb3cd597d7008b060e0aa7de59c5ca7cc94d75317fe3456005fac81596"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.183815 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" event={"ID":"d41c5f7d-d392-418f-af58-6f69862c74ea","Type":"ContainerStarted","Data":"44ede999f8d4e942d012594b05e8d052e1254490091bcd25c15a5a99373b95d8"}
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.186158 4860 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-m84ss container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.186192 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" podUID="ca4179d4-5b4c-4b52-be97-9a0e9aa8c106" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.186516 4860 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-2dz4s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.186537 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" podUID="bdb25ff1-18af-4f95-a3e7-09472726d3df" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.186605 4860 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5xlzj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" start-of-body=
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.186622 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" podUID="1271b3e0-b6e9-45cf-a267-ab013c556fc6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused"
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.202214 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.202268 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.203612 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-74th4" podStartSLOduration=123.20359861 podStartE2EDuration="2m3.20359861s" podCreationTimestamp="2025-10-14 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:21.202545954 +0000 UTC m=+142.789329403" watchObservedRunningTime="2025-10-14 14:51:21.20359861 +0000 UTC m=+142.790382059"
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.204198 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:21 crc kubenswrapper[4860]: E1014 14:51:21.205987 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:21.705971377 +0000 UTC m=+143.292754826 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.307422 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:21 crc kubenswrapper[4860]: E1014 14:51:21.309298 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:21.80928357 +0000 UTC m=+143.396067019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.350364 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ql4q7"]
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.373122 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq"]
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.379110 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cbpmm"]
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.408367 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:21 crc kubenswrapper[4860]: E1014 14:51:21.408935 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:21.908917863 +0000 UTC m=+143.495701312 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.505947 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" podStartSLOduration=122.505930993 podStartE2EDuration="2m2.505930993s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:21.504227442 +0000 UTC m=+143.091010891" watchObservedRunningTime="2025-10-14 14:51:21.505930993 +0000 UTC m=+143.092714442"
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.510373 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:21 crc kubenswrapper[4860]: E1014 14:51:21.510908 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:22.010893743 +0000 UTC m=+143.597677192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.608753 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8"]
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.612920 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:21 crc kubenswrapper[4860]: E1014 14:51:21.613218 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:22.113204461 +0000 UTC m=+143.699987910 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.620316 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf94b" podStartSLOduration=123.620302542 podStartE2EDuration="2m3.620302542s" podCreationTimestamp="2025-10-14 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:21.618406106 +0000 UTC m=+143.205189555" watchObservedRunningTime="2025-10-14 14:51:21.620302542 +0000 UTC m=+143.207085991"
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.674477 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg"]
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.713891 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:21 crc kubenswrapper[4860]: E1014 14:51:21.714186 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:22.214174637 +0000 UTC m=+143.800958086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.714523 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr"]
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.727225 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-srxmc"]
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.804357 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g"]
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.817855 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vnc8p"]
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.820060 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:21 crc kubenswrapper[4860]: E1014 14:51:21.820360 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:22.320345078 +0000 UTC m=+143.907128527 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.887793 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-xjwnb"]
Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.921949 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:21 crc kubenswrapper[4860]: E1014 14:51:21.922288 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:22.422273336 +0000 UTC m=+144.009056785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:21 crc kubenswrapper[4860]: I1014 14:51:21.940897 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr"] Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.033479 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:22 crc kubenswrapper[4860]: E1014 14:51:22.033789 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:22.533773046 +0000 UTC m=+144.120556495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.107582 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" podStartSLOduration=123.107566566 podStartE2EDuration="2m3.107566566s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:22.058213656 +0000 UTC m=+143.644997105" watchObservedRunningTime="2025-10-14 14:51:22.107566566 +0000 UTC m=+143.694350015" Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.134885 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:22 crc kubenswrapper[4860]: E1014 14:51:22.135334 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:22.635322566 +0000 UTC m=+144.222106015 (durationBeforeRetry 500ms). 
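Each failure is then parked behind a retry gate: nestedpendingoperations records the error and refuses further attempts on that volume until the deadline in the "No retries permitted until ..." message passes. A self-contained sketch of that bookkeeping follows; the fixed 500ms delay is taken from these entries, while the real implementation can also grow the delay on repeated failures.

```go
package main

import (
	"fmt"
	"time"
)

// pendingOp models one queued volume operation: after a failure it may not be
// retried before notBefore. Purely illustrative, not the kubelet's code.
type pendingOp struct {
	lastErr   error
	notBefore time.Time
}

func (op *pendingOp) fail(now time.Time, err error, delay time.Duration) {
	op.lastErr = err
	op.notBefore = now.Add(delay)
	fmt.Printf("failed. No retries permitted until %s (durationBeforeRetry %s). Error: %v\n",
		op.notBefore.Format("2006-01-02 15:04:05.999999999 -0700 MST"), delay, err)
}

func (op *pendingOp) mayRetry(now time.Time) bool {
	return now.After(op.notBefore)
}

func main() {
	op := &pendingOp{}
	now := time.Date(2025, 10, 14, 14, 51, 21, 714186000, time.UTC)
	op.fail(now, fmt.Errorf("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers"),
		500*time.Millisecond)
	fmt.Println("retry allowed at +200ms?", op.mayRetry(now.Add(200*time.Millisecond))) // false
	fmt.Println("retry allowed at +600ms?", op.mayRetry(now.Add(600*time.Millisecond))) // true
}
```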
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.139133 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2j8s" podStartSLOduration=124.139120047 podStartE2EDuration="2m4.139120047s" podCreationTimestamp="2025-10-14 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:22.109195845 +0000 UTC m=+143.695979314" watchObservedRunningTime="2025-10-14 14:51:22.139120047 +0000 UTC m=+143.725903496" Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.189305 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pn2jl" event={"ID":"9a8bfc59-1b02-4a57-8785-146540f864db","Type":"ContainerStarted","Data":"ab6cbd8ecf5085d9f360ffbbe9d102c0d045bdc2365c85372e98fffb2d521467"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.191720 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s" event={"ID":"6e05731e-4ea5-4b63-8b25-946dc11fa091","Type":"ContainerStarted","Data":"c9d2476ae0de48e72e5a5e3b12faccb521dc02cfc7b1a66375058f68dbdb16a6"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.197481 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sr5b4" event={"ID":"b1b285a3-b917-4698-860d-a00c351727f2","Type":"ContainerStarted","Data":"6c8c8489645c8daff9f497f6bc6f1668837929d1480c2d85d467fa2f5b92e280"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.198293 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vnc8p" event={"ID":"c0397d16-c623-4dd4-8b8d-b974bbd1e9db","Type":"ContainerStarted","Data":"32e9b1127d86a15ad71d1af4371a85ed7bf1f3333e9a8f5577576ce1e8bd2abb"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.199231 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" event={"ID":"f0162698-ec9f-47b8-896d-af15ae62668a","Type":"ContainerStarted","Data":"9a81a26662173608838ad271c574fad0c6d69785ca33c652cd59e3f727a6e37f"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.200049 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8" event={"ID":"bc47043b-7968-40cf-94f6-3c5a91a433a4","Type":"ContainerStarted","Data":"9d43ac122683ad047a21c9eff7579a893403ec3837fbfd37e9fdd10ea8517aaf"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.200849 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbpmm" event={"ID":"eef02ff0-0b8a-4fd2-8ee5-162644e9f38c","Type":"ContainerStarted","Data":"a05b36a88d30ff19c863af2cdb74e1f12999c57f87b7e02ba8fbe58ab98d5566"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.201592 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" event={"ID":"91f0ff50-8025-417f-8349-bb7b79b04441","Type":"ContainerStarted","Data":"ea13c0b7b22c724c00adab3ab431c8f8fbdf9fee706dd30372e3c288f72e7390"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.202484 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-srxmc" event={"ID":"7415bf9f-2145-43a1-b6b8-121b39180dbd","Type":"ContainerStarted","Data":"00d41d7a49b39791da03b6ad29082ae02a87e83e9939a957cb3549f96e4d160b"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.203247 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g" event={"ID":"69e4a58b-d51f-447d-82e7-a3e4926c08a1","Type":"ContainerStarted","Data":"8600618e6ffbb70cd181c78a429f80e2ef30a35d0390c0fc6c2a442e88905b97"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.204159 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq" event={"ID":"582a197a-7948-4494-b5dc-cb2c0d014e11","Type":"ContainerStarted","Data":"fae3cacbefc47771899f8671facd943ba45f8263e73780047d25a96d4ba5bb7b"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.204870 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr" event={"ID":"7a98873d-4d33-431c-b006-634029aafc31","Type":"ContainerStarted","Data":"46c7fd6da67fb0e24b914159d070d94dcafa8b2f9e5ac2ad0a2bf92bb9811d16"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.205653 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-cv25g" event={"ID":"ae97ab9f-b072-4cb2-85da-577097382ed5","Type":"ContainerStarted","Data":"58e61d0a9324f76cd0c1df8d4c0c3981ca0abad9f29fde6ae514014b9365afc5"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.206835 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" event={"ID":"41466bd8-8531-4987-b1b6-ef965ebe180a","Type":"ContainerStarted","Data":"ccce1bd54ebb5ee4e7a3ebe0a0b8fdda13c794eeed9228cc1723e194ffa01589"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.207649 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr" event={"ID":"4b40d986-5c61-48a7-bcf1-89a6d8939870","Type":"ContainerStarted","Data":"9fef2002f738e86a276993931ef3f9357cf641d95743805a3f03435f67d48d01"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.208771 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ql4q7" event={"ID":"f5b47471-c477-482c-8462-62edd00df3bc","Type":"ContainerStarted","Data":"daff288e815147ec2a7ee1c64df2f5f103006e9c1a113c47f749e4f2c48b4f2b"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.209627 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp" event={"ID":"762ac590-5dba-4663-a225-81765c4ae57a","Type":"ContainerStarted","Data":"6df7ce6491e8ed0b41525701bf763d106f01ef83270805782ca971a41b7c05fe"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.212013 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2t9qw" 
event={"ID":"ccaaad20-81e9-4ba2-ab8b-91bbba22f17f","Type":"ContainerStarted","Data":"55148fd941ef7c0e692413d4e959f214a705b3889d1db1f2205451d9922d6732"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.214560 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-xjwnb" event={"ID":"7b5e171c-1dce-4002-9207-474f3fad14a1","Type":"ContainerStarted","Data":"4ed03be75305489d5163c95f5995fa43244d3dfae9b028b5498ed094fe688103"} Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.215585 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.215700 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.216359 4860 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-m84ss container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.216733 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" podUID="ca4179d4-5b4c-4b52-be97-9a0e9aa8c106" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.219235 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" podStartSLOduration=123.21922007 podStartE2EDuration="2m3.21922007s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:22.218085183 +0000 UTC m=+143.804868622" watchObservedRunningTime="2025-10-14 14:51:22.21922007 +0000 UTC m=+143.806003519" Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.237649 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:22 crc kubenswrapper[4860]: E1014 14:51:22.238049 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:22.738002343 +0000 UTC m=+144.324785792 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.248448 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp"] Oct 14 14:51:22 crc kubenswrapper[4860]: W1014 14:51:22.249888 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fd546c2_8f3f_459f_bd94_75f8d755d9e5.slice/crio-1add0fdbb61e5ef29e4f6a0897ed013bd40c561867ba7369cb7805bdb77c0f89 WatchSource:0}: Error finding container 1add0fdbb61e5ef29e4f6a0897ed013bd40c561867ba7369cb7805bdb77c0f89: Status 404 returned error can't find the container with id 1add0fdbb61e5ef29e4f6a0897ed013bd40c561867ba7369cb7805bdb77c0f89 Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.287556 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c"] Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.295432 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69"] Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.309354 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.338678 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-jnwqb" podStartSLOduration=123.338663121 podStartE2EDuration="2m3.338663121s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:22.337416981 +0000 UTC m=+143.924200430" watchObservedRunningTime="2025-10-14 14:51:22.338663121 +0000 UTC m=+143.925446570" Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.339572 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:22 crc kubenswrapper[4860]: E1014 14:51:22.342300 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:22.842288048 +0000 UTC m=+144.429071487 (durationBeforeRetry 500ms). 
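A side note on the timestamps: the m=+144.324785792 suffixes are Go's monotonic clock reading, the seconds elapsed since the kubelet process obtained its monotonic reference (roughly process start), which time.Time's String method appends after the wall-clock value. That makes exact intervals between entries computable even if the wall clock steps. For example:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now() // carries a monotonic reading
	time.Sleep(50 * time.Millisecond)
	t := time.Now()
	fmt.Println(t)            // "... m=+0.05xxxxxxx", like the log's m=+ suffixes
	fmt.Println(t.Round(0))   // Round(0) strips the monotonic reading: wall clock only
	fmt.Println(t.Sub(start)) // differences between such times use the monotonic clock
}
```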
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.368175 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx"] Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.375530 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rr82d"] Oct 14 14:51:22 crc kubenswrapper[4860]: W1014 14:51:22.427362 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc88dcd06_e148_4382_945e_8700a7400f00.slice/crio-0aa95fb3045738074615b56a9f562abf017054164c8272061cd5669908982cb1 WatchSource:0}: Error finding container 0aa95fb3045738074615b56a9f562abf017054164c8272061cd5669908982cb1: Status 404 returned error can't find the container with id 0aa95fb3045738074615b56a9f562abf017054164c8272061cd5669908982cb1 Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.440751 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:22 crc kubenswrapper[4860]: E1014 14:51:22.440914 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:22.940888137 +0000 UTC m=+144.527671586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.441092 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:22 crc kubenswrapper[4860]: E1014 14:51:22.441456 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:22.94144128 +0000 UTC m=+144.528224729 (durationBeforeRetry 500ms). 
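The interleaved SyncLoop UPDATE source="api" lines are the sync loop's other input: pod updates arriving over the kubelet's watch against the API server, as opposed to the PLEG events that come from the container runtime. A hedged client-go approximation of such a node-scoped pod watch is below; it assumes a reachable cluster and a local kubeconfig, and the kubelet itself uses an internal config mux rather than this exact code.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Watch only pods bound to this node, as the kubelet effectively does.
	w, err := cs.CoreV1().Pods("").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=crc",
	})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		if pod, ok := ev.Object.(*corev1.Pod); ok {
			fmt.Printf("SyncLoop %s from api: %s/%s\n", ev.Type, pod.Namespace, pod.Name)
		}
	}
}
```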
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.468480 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xm46p" podStartSLOduration=123.468457522 podStartE2EDuration="2m3.468457522s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:22.462535179 +0000 UTC m=+144.049318628" watchObservedRunningTime="2025-10-14 14:51:22.468457522 +0000 UTC m=+144.055240971" Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.523578 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-glhbr" podStartSLOduration=123.523560831 podStartE2EDuration="2m3.523560831s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:22.510987668 +0000 UTC m=+144.097771127" watchObservedRunningTime="2025-10-14 14:51:22.523560831 +0000 UTC m=+144.110344280" Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.543186 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:22 crc kubenswrapper[4860]: E1014 14:51:22.543630 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:23.043616045 +0000 UTC m=+144.630399494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.621523 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ccjhg" podStartSLOduration=124.621490724 podStartE2EDuration="2m4.621490724s" podCreationTimestamp="2025-10-14 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:22.591407748 +0000 UTC m=+144.178191197" watchObservedRunningTime="2025-10-14 14:51:22.621490724 +0000 UTC m=+144.208274173" Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.646198 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:22 crc kubenswrapper[4860]: E1014 14:51:22.649744 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:23.149717734 +0000 UTC m=+144.736501253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.701272 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" podStartSLOduration=124.701253788 podStartE2EDuration="2m4.701253788s" podCreationTimestamp="2025-10-14 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:22.661201241 +0000 UTC m=+144.247984690" watchObservedRunningTime="2025-10-14 14:51:22.701253788 +0000 UTC m=+144.288037237" Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.703476 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bvxsd" podStartSLOduration=123.703462881 podStartE2EDuration="2m3.703462881s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:22.700712474 +0000 UTC m=+144.287495923" watchObservedRunningTime="2025-10-14 14:51:22.703462881 +0000 UTC m=+144.290246330" Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.752517 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:22 crc kubenswrapper[4860]: E1014 14:51:22.752819 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:23.252804821 +0000 UTC m=+144.839588270 (durationBeforeRetry 500ms). 
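The "Observed pod startup duration" entries come from the kubelet's pod startup latency tracker. Both pull timestamps are the zero time (no image pull was observed; the images were already present), so the SLO duration equals the end-to-end duration: observed running time minus podCreationTimestamp. Reconstructing the downloads-7954f5f757-bvxsd numbers from the entry above:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	created, _ := time.Parse(layout, "2025-10-14 14:49:19 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-10-14 14:51:22.703462881 +0000 UTC")
	e2e := observed.Sub(created)
	fmt.Println("podStartE2EDuration:", e2e)           // 2m3.703462881s
	fmt.Println("podStartSLOduration:", e2e.Seconds()) // 123.703462881
}
```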
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.834046 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-mslb8" podStartSLOduration=123.83400912 podStartE2EDuration="2m3.83400912s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:22.775346765 +0000 UTC m=+144.362130224" watchObservedRunningTime="2025-10-14 14:51:22.83400912 +0000 UTC m=+144.420792569" Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.860328 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:22 crc kubenswrapper[4860]: E1014 14:51:22.860688 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:23.360674174 +0000 UTC m=+144.947457613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:22 crc kubenswrapper[4860]: I1014 14:51:22.964852 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:22 crc kubenswrapper[4860]: E1014 14:51:22.965616 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:23.465594965 +0000 UTC m=+145.052378414 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.067741 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:23 crc kubenswrapper[4860]: E1014 14:51:23.068151 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:23.568130888 +0000 UTC m=+145.154914337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.160233 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.160733 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.162340 4860 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-snzz9 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.163589 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" podUID="34a37609-7fba-4b24-93ab-36d55d11dfe8" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.170941 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:23 crc kubenswrapper[4860]: E1014 14:51:23.171437 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-14 14:51:23.67142035 +0000 UTC m=+145.258203799 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.225728 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" event={"ID":"16ad23c1-8e88-4556-85ce-0eca934160f9","Type":"ContainerStarted","Data":"9b3f5401adadfca1e9a9f3cf9d931fd46576f7ba5b1fc82fdacf33a7affb412f"} Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.228024 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-c7tw5" event={"ID":"ab78dde3-9209-4eb9-9d32-bf7d9ecc6d1b","Type":"ContainerStarted","Data":"1d4b61f53e157025c5e6b5916d7080bb54ddc02be9b49da4971739042e7c63f7"} Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.232162 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c" event={"ID":"8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed","Type":"ContainerStarted","Data":"41754854a6bb8f768663be352cf67ba7b049c0f580b0c2aefb489ca5d6d38c0a"} Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.234203 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rr82d" event={"ID":"47072811-afb6-4585-bf8c-4a0234aa5f1d","Type":"ContainerStarted","Data":"b52eaa412db808ca61500c4c43263b22e02e117322d9dea361e57b142bec2e22"} Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.236502 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" event={"ID":"4ccd794b-61d7-4a05-a75a-c6ef83877769","Type":"ContainerStarted","Data":"40271ae986cc53c4d73eec2ec2d57d43c3e06fe3f8ba3c284da2602bc573c294"} Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.244141 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-b4brk" event={"ID":"0f24486a-4d47-4365-8930-f7eabfd033fa","Type":"ContainerStarted","Data":"459455b7bb5734f57e084e8cbaa6c789cbab66312d60e095e9f794ccbc727a01"} Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.244212 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-b4brk" Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.245849 4860 patch_prober.go:28] interesting pod/console-operator-58897d9998-b4brk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.245896 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-b4brk" podUID="0f24486a-4d47-4365-8930-f7eabfd033fa" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 
14:51:23.248009 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp" event={"ID":"0fd546c2-8f3f-459f-bd94-75f8d755d9e5","Type":"ContainerStarted","Data":"1add0fdbb61e5ef29e4f6a0897ed013bd40c561867ba7369cb7805bdb77c0f89"} Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.258465 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx" event={"ID":"915db808-f0c4-4b81-aaac-8dbfd3a5b201","Type":"ContainerStarted","Data":"ed3f5eda712afa479941b18bde51ed05792f1785e7dcef3752bc066e95964518"} Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.270573 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69" event={"ID":"c88dcd06-e148-4382-945e-8700a7400f00","Type":"ContainerStarted","Data":"0aa95fb3045738074615b56a9f562abf017054164c8272061cd5669908982cb1"} Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.271885 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-b4brk" podStartSLOduration=124.271869692 podStartE2EDuration="2m4.271869692s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:23.269542417 +0000 UTC m=+144.856325876" watchObservedRunningTime="2025-10-14 14:51:23.271869692 +0000 UTC m=+144.858653141" Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.272217 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:23 crc kubenswrapper[4860]: E1014 14:51:23.272656 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:23.772640701 +0000 UTC m=+145.359424150 (durationBeforeRetry 500ms). 
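The burst of SyncLoop (PLEG) lines reports what the pod lifecycle event generator found when relisting the runtime: new container and sandbox IDs for pods whose containers just started. Each event carries the pod UID, an event type such as ContainerStarted, and the container ID as payload. A simplified shape, loosely modeled on the kubelet's PodLifecycleEvent:

```go
package main

import "fmt"

type PodLifecycleEventType string

const ContainerStarted PodLifecycleEventType = "ContainerStarted"

// PodLifecycleEvent is a simplified stand-in for the kubelet's PLEG event:
// pod UID, what happened, and a payload such as the container ID.
type PodLifecycleEvent struct {
	ID   string
	Type PodLifecycleEventType
	Data interface{}
}

func main() {
	events := make(chan PodLifecycleEvent, 1)
	events <- PodLifecycleEvent{
		ID:   "0fd546c2-8f3f-459f-bd94-75f8d755d9e5", // collect-profiles pod above
		Type: ContainerStarted,
		Data: "1add0fdbb61e5ef29e4f6a0897ed013bd40c561867ba7369cb7805bdb77c0f89",
	}
	e := <-events
	fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %v\n", e.ID, e.Type, e.Data)
}
```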
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.273628 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sr5b4" event={"ID":"b1b285a3-b917-4698-860d-a00c351727f2","Type":"ContainerStarted","Data":"ce7a84c71cb491e14b7280ce39da18725219aec4d3ccd161366f8897e7b1123d"} Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.280352 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" event={"ID":"91f0ff50-8025-417f-8349-bb7b79b04441","Type":"ContainerStarted","Data":"5d955e0679096f2a789aafc75e490dcad3fbb98d8369dca8e2ce277b051cee20"} Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.281487 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp" event={"ID":"762ac590-5dba-4663-a225-81765c4ae57a","Type":"ContainerStarted","Data":"3bc80d11644e7af9af0847981c068e41f83fc52bf1e3f2fc1e99f62874845d45"} Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.284681 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-cv25g" event={"ID":"ae97ab9f-b072-4cb2-85da-577097382ed5","Type":"ContainerStarted","Data":"e53d94106e2572f6ef61db1720e471f486d3b6d8671f6a7f27d082f624a858cf"} Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.285796 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.285840 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.307947 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s" podStartSLOduration=124.307930712 podStartE2EDuration="2m4.307930712s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:23.307312218 +0000 UTC m=+144.894095667" watchObservedRunningTime="2025-10-14 14:51:23.307930712 +0000 UTC m=+144.894714161" Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.372859 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:23 crc kubenswrapper[4860]: E1014 14:51:23.373805 4860 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:23.873779101 +0000 UTC m=+145.460562550 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.476001 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:23 crc kubenswrapper[4860]: E1014 14:51:23.476363 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:23.976347085 +0000 UTC m=+145.563130534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.578682 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:23 crc kubenswrapper[4860]: E1014 14:51:23.578785 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:24.078766336 +0000 UTC m=+145.665549795 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.579290 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:23 crc kubenswrapper[4860]: E1014 14:51:23.579570 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:24.079560225 +0000 UTC m=+145.666343674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.680168 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:23 crc kubenswrapper[4860]: E1014 14:51:23.680355 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:24.180327016 +0000 UTC m=+145.767110465 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.680676 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:23 crc kubenswrapper[4860]: E1014 14:51:23.681126 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:24.181106375 +0000 UTC m=+145.767889824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.781185 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:23 crc kubenswrapper[4860]: E1014 14:51:23.781375 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:24.281347963 +0000 UTC m=+145.868131412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.781443 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:23 crc kubenswrapper[4860]: E1014 14:51:23.781757 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:24.281749692 +0000 UTC m=+145.868533141 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.882934 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:23 crc kubenswrapper[4860]: E1014 14:51:23.883371 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:24.383358813 +0000 UTC m=+145.970142262 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:23 crc kubenswrapper[4860]: I1014 14:51:23.983933 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:23 crc kubenswrapper[4860]: E1014 14:51:23.984221 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:24.484210516 +0000 UTC m=+146.070993965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.085100 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:24 crc kubenswrapper[4860]: E1014 14:51:24.085742 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:24.585725425 +0000 UTC m=+146.172508874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.131822 4860 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-jwpzd container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.131840 4860 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-jwpzd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.131864 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" podUID="5aa947ac-94f3-4582-a725-18082f637305" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.131867 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" podUID="5aa947ac-94f3-4582-a725-18082f637305" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.186748 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:24 crc kubenswrapper[4860]: E1014 14:51:24.187110 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:24.68709569 +0000 UTC m=+146.273879139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.287352 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:24 crc kubenswrapper[4860]: E1014 14:51:24.287496 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:24.787471482 +0000 UTC m=+146.374254931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.287716 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:24 crc kubenswrapper[4860]: E1014 14:51:24.288012 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:24.788004555 +0000 UTC m=+146.374788004 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.293106 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq" event={"ID":"582a197a-7948-4494-b5dc-cb2c0d014e11","Type":"ContainerStarted","Data":"f3be4a87d14fe785015bf4d3a16c6f5721c240b71ce466a8cfc1972145d4cef8"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.295371 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2t9qw" event={"ID":"ccaaad20-81e9-4ba2-ab8b-91bbba22f17f","Type":"ContainerStarted","Data":"2880dd25d3f5ca30bc2c4d1e13c32f8e42cb7537e3222cbc7997872202e93d7b"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.295402 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2t9qw" event={"ID":"ccaaad20-81e9-4ba2-ab8b-91bbba22f17f","Type":"ContainerStarted","Data":"970911ecf2ecf6e68812af681f96afbcb6c2e4ec1d81015cc04aef8476283a95"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.296528 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp" event={"ID":"0fd546c2-8f3f-459f-bd94-75f8d755d9e5","Type":"ContainerStarted","Data":"a764fb83a254d629a5b1eaedcb3c26d9d0578f958f0e45462f240854fbcd0c97"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.298482 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx" event={"ID":"915db808-f0c4-4b81-aaac-8dbfd3a5b201","Type":"ContainerStarted","Data":"1b9b834b8dab2b0c1239f442235283d919e0ed21274543e94f885a6858ee0879"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.300066 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbpmm" event={"ID":"eef02ff0-0b8a-4fd2-8ee5-162644e9f38c","Type":"ContainerStarted","Data":"9cfdc7614aa386b4fab0f83925084f101fbeb00331dc00426f1b69d6c0175221"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.301328 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8" event={"ID":"bc47043b-7968-40cf-94f6-3c5a91a433a4","Type":"ContainerStarted","Data":"f71addced4d3cea9d5bbc3342b9557b07f864f4bdb1a561240a7fb5375942da7"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.302957 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rr82d" event={"ID":"47072811-afb6-4585-bf8c-4a0234aa5f1d","Type":"ContainerStarted","Data":"4105fe1a6336294875dc04ccaa5a933b08c6eab51845384ea8ee1bbc8d8b94e3"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.304954 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-c7tw5" event={"ID":"ab78dde3-9209-4eb9-9d32-bf7d9ecc6d1b","Type":"ContainerStarted","Data":"2c509686679deed0302a107bc06a24d18a23a2879d97e5523a1bdd1aedaeca69"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.306759 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vnc8p" event={"ID":"c0397d16-c623-4dd4-8b8d-b974bbd1e9db","Type":"ContainerStarted","Data":"09257102c282a33e645512d78bf61654659f8e2abbef2f398ecfec1cb982e268"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.308442 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-xjwnb" event={"ID":"7b5e171c-1dce-4002-9207-474f3fad14a1","Type":"ContainerStarted","Data":"9382713a4c7b98f5530b18c05931a5346d0e1ce2c4eee3ed7f3d4586012c0ec9"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.311807 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp" event={"ID":"762ac590-5dba-4663-a225-81765c4ae57a","Type":"ContainerStarted","Data":"9a57a469d31501a8ab5955f2162ad1e8d7db4d0faf6fe092aa356c90d0163dd7"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.313183 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-64chq" podStartSLOduration=125.313163431 podStartE2EDuration="2m5.313163431s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.310673811 +0000 UTC m=+145.897457280" watchObservedRunningTime="2025-10-14 14:51:24.313163431 +0000 UTC m=+145.899946880"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.313946 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c" event={"ID":"8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed","Type":"ContainerStarted","Data":"ab08c44eb7c29a35d72f3fa7da0e0a322c617413997503849b460326e924f8fc"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.320894 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ql4q7" event={"ID":"f5b47471-c477-482c-8462-62edd00df3bc","Type":"ContainerStarted","Data":"e1a97b50fdcfc6ed72d3e9b63484d1439c11d69b8a279012f7fade5e2661f674"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.331864 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rr82d" podStartSLOduration=7.331846342 podStartE2EDuration="7.331846342s" podCreationTimestamp="2025-10-14 14:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.329514776 +0000 UTC m=+145.916298215" watchObservedRunningTime="2025-10-14 14:51:24.331846342 +0000 UTC m=+145.918629781"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.350945 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" event={"ID":"f0162698-ec9f-47b8-896d-af15ae62668a","Type":"ContainerStarted","Data":"80e77e02cdf6ba3b16f47ebf197fdbb56644e710b70b72666ad11eae4fc4e244"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.351594 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.353348 4860 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xtchg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.353405 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" podUID="f0162698-ec9f-47b8-896d-af15ae62668a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.364721 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pn2jl" event={"ID":"9a8bfc59-1b02-4a57-8785-146540f864db","Type":"ContainerStarted","Data":"83376828dd5a13cd43676b670ab3567106c8854edd6a7c248633d786c3dde6da"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.375633 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bnsh8" podStartSLOduration=125.375619398 podStartE2EDuration="2m5.375619398s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.374284966 +0000 UTC m=+145.961068425" watchObservedRunningTime="2025-10-14 14:51:24.375619398 +0000 UTC m=+145.962402847"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.376011 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g" event={"ID":"69e4a58b-d51f-447d-82e7-a3e4926c08a1","Type":"ContainerStarted","Data":"8f7498f14c823392a48c8578b6c89213f3e079450727ddeb8fd4f2e8a754147b"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.376070 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g" event={"ID":"69e4a58b-d51f-447d-82e7-a3e4926c08a1","Type":"ContainerStarted","Data":"8b2cf63cd05455ddc23b14d65f30699ae658628ffd57ab4510bf85186294e62a"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.384630 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr" event={"ID":"7a98873d-4d33-431c-b006-634029aafc31","Type":"ContainerStarted","Data":"e84aae87d79ad7a6500ad2c0f8058e48f818a88f971830fc621073fee3fa2780"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.385591 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.387662 4860 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-fwhnr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.387722 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr" podUID="7a98873d-4d33-431c-b006-634029aafc31" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.387668 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr" event={"ID":"4b40d986-5c61-48a7-bcf1-89a6d8939870","Type":"ContainerStarted","Data":"2fa90231c2ae6990ae760ab8f572fa6803ac476df033b5c1368518459e963a00"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.388375 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:24 crc kubenswrapper[4860]: E1014 14:51:24.388512 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:24.888492279 +0000 UTC m=+146.475275728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.388990 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.390227 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69" event={"ID":"c88dcd06-e148-4382-945e-8700a7400f00","Type":"ContainerStarted","Data":"20529cfa8b33f5b993c5871aeee2a57fbf3c15a5c59ced0f988c4647869ecd55"}
Oct 14 14:51:24 crc kubenswrapper[4860]: E1014 14:51:24.391007 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:24.890994199 +0000 UTC m=+146.477777648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.404275 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" event={"ID":"4ccd794b-61d7-4a05-a75a-c6ef83877769","Type":"ContainerStarted","Data":"1e1090857f89fc8d45030f9b4382944b5cc491f932e577a0909ea0bc7c526e6d"}
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.405259 4860 patch_prober.go:28] interesting pod/console-operator-58897d9998-b4brk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.405308 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-b4brk" podUID="0f24486a-4d47-4365-8930-f7eabfd033fa" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.405564 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fml8s"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.406431 4860 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fml8s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.406489 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" podUID="91f0ff50-8025-417f-8349-bb7b79b04441" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.437617 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp" podStartSLOduration=126.437602033 podStartE2EDuration="2m6.437602033s" podCreationTimestamp="2025-10-14 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.405415777 +0000 UTC m=+145.992199226" watchObservedRunningTime="2025-10-14 14:51:24.437602033 +0000 UTC m=+146.024385482"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.438182 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-c7tw5" podStartSLOduration=125.438178647 podStartE2EDuration="2m5.438178647s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.433227948 +0000 UTC m=+146.020011407" watchObservedRunningTime="2025-10-14 14:51:24.438178647 +0000 UTC m=+146.024962096"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.464919 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gbwlx" podStartSLOduration=125.464901351 podStartE2EDuration="2m5.464901351s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.46360573 +0000 UTC m=+146.050389189" watchObservedRunningTime="2025-10-14 14:51:24.464901351 +0000 UTC m=+146.051684800"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.480150 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-xjwnb" podStartSLOduration=125.480130709 podStartE2EDuration="2m5.480130709s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.476490242 +0000 UTC m=+146.063273691" watchObservedRunningTime="2025-10-14 14:51:24.480130709 +0000 UTC m=+146.066914158"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.489861 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:24 crc kubenswrapper[4860]: E1014 14:51:24.491321 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:24.991298108 +0000 UTC m=+146.578081617 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.530424 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-sr5b4" podStartSLOduration=125.530403362 podStartE2EDuration="2m5.530403362s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.50835553 +0000 UTC m=+146.095138999" watchObservedRunningTime="2025-10-14 14:51:24.530403362 +0000 UTC m=+146.117186811"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.531062 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2sf69" podStartSLOduration=125.531056648 podStartE2EDuration="2m5.531056648s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.530098085 +0000 UTC m=+146.116881544" watchObservedRunningTime="2025-10-14 14:51:24.531056648 +0000 UTC m=+146.117840097"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.583542 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" podStartSLOduration=125.583523683 podStartE2EDuration="2m5.583523683s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.578952013 +0000 UTC m=+146.165735462" watchObservedRunningTime="2025-10-14 14:51:24.583523683 +0000 UTC m=+146.170307132"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.583860 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ql4q7" podStartSLOduration=125.583855901 podStartE2EDuration="2m5.583855901s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.55393334 +0000 UTC m=+146.140716789" watchObservedRunningTime="2025-10-14 14:51:24.583855901 +0000 UTC m=+146.170639350"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.592832 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:24 crc kubenswrapper[4860]: E1014 14:51:24.593174 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:25.093163546 +0000 UTC m=+146.679946995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.608586 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-cv25g" podStartSLOduration=125.608572888 podStartE2EDuration="2m5.608572888s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.605921593 +0000 UTC m=+146.192705042" watchObservedRunningTime="2025-10-14 14:51:24.608572888 +0000 UTC m=+146.195356327"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.626094 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-cv25g"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.628825 4860 patch_prober.go:28] interesting pod/router-default-5444994796-cv25g container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.628875 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cv25g" podUID="ae97ab9f-b072-4cb2-85da-577097382ed5" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.635111 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" podStartSLOduration=125.635093017 podStartE2EDuration="2m5.635093017s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.630366384 +0000 UTC m=+146.217149833" watchObservedRunningTime="2025-10-14 14:51:24.635093017 +0000 UTC m=+146.221876456"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.676443 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z65gp" podStartSLOduration=125.676422184 podStartE2EDuration="2m5.676422184s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.656952405 +0000 UTC m=+146.243735864" watchObservedRunningTime="2025-10-14 14:51:24.676422184 +0000 UTC m=+146.263205643"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.678587 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qztlr" podStartSLOduration=125.678572276 podStartE2EDuration="2m5.678572276s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.675302658 +0000 UTC m=+146.262086107" watchObservedRunningTime="2025-10-14 14:51:24.678572276 +0000 UTC m=+146.265355735"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.693747 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:24 crc kubenswrapper[4860]: E1014 14:51:24.693959 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:25.193929417 +0000 UTC m=+146.780712906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.694261 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:24 crc kubenswrapper[4860]: E1014 14:51:24.694547 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:25.194538471 +0000 UTC m=+146.781321920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.695599 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-smx67" podStartSLOduration=125.695586086 podStartE2EDuration="2m5.695586086s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.693100816 +0000 UTC m=+146.279884295" watchObservedRunningTime="2025-10-14 14:51:24.695586086 +0000 UTC m=+146.282369545"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.720377 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-pn2jl" podStartSLOduration=7.7203593040000005 podStartE2EDuration="7.720359304s" podCreationTimestamp="2025-10-14 14:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.720275762 +0000 UTC m=+146.307059231" watchObservedRunningTime="2025-10-14 14:51:24.720359304 +0000 UTC m=+146.307142753"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.758757 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" podStartSLOduration=126.75874227 podStartE2EDuration="2m6.75874227s" podCreationTimestamp="2025-10-14 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.758568806 +0000 UTC m=+146.345352255" watchObservedRunningTime="2025-10-14 14:51:24.75874227 +0000 UTC m=+146.345525709"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.781977 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr" podStartSLOduration=125.78195851 podStartE2EDuration="2m5.78195851s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:24.78071911 +0000 UTC m=+146.367502559" watchObservedRunningTime="2025-10-14 14:51:24.78195851 +0000 UTC m=+146.368741969"
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.795637 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:24 crc kubenswrapper[4860]: E1014 14:51:24.796268 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:25.296247685 +0000 UTC m=+146.883031144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.897119 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:24 crc kubenswrapper[4860]: E1014 14:51:24.897518 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:25.397502147 +0000 UTC m=+146.984285596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.997984 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:24 crc kubenswrapper[4860]: E1014 14:51:24.998163 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:25.498134925 +0000 UTC m=+147.084918364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:24 crc kubenswrapper[4860]: I1014 14:51:24.998237 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:24 crc kubenswrapper[4860]: E1014 14:51:24.998532 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:25.498523885 +0000 UTC m=+147.085307334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.098879 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:25 crc kubenswrapper[4860]: E1014 14:51:25.099109 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:25.59908099 +0000 UTC m=+147.185864429 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.099333 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:25 crc kubenswrapper[4860]: E1014 14:51:25.099623 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:25.599611473 +0000 UTC m=+147.186394922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.200529 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:25 crc kubenswrapper[4860]: E1014 14:51:25.200720 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:25.700697681 +0000 UTC m=+147.287481120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.201177 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:25 crc kubenswrapper[4860]: E1014 14:51:25.201456 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:25.701444569 +0000 UTC m=+147.288228018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.302703 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:25 crc kubenswrapper[4860]: E1014 14:51:25.302947 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:25.802915507 +0000 UTC m=+147.389698966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.404679 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:25 crc kubenswrapper[4860]: E1014 14:51:25.405095 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:25.905079572 +0000 UTC m=+147.491863021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.410331 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbpmm" event={"ID":"eef02ff0-0b8a-4fd2-8ee5-162644e9f38c","Type":"ContainerStarted","Data":"6a55fde8e5f2b4f81912c0dd682335e15cc480f8898807eed90ee8f73b58e25a"}
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.412143 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c" event={"ID":"8d2c2d08-c96e-40e0-8f5f-b2b34292e6ed","Type":"ContainerStarted","Data":"a92901ce6f9741f431a4ad1e587fd7784488835f54d1f1bb1851d72e9272095f"}
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.414400 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vnc8p" event={"ID":"c0397d16-c623-4dd4-8b8d-b974bbd1e9db","Type":"ContainerStarted","Data":"ff3286a0ead5958db26e6e8e905048687c80e88d0dac80b128601e66cfd2bf91"}
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.415252 4860 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-fwhnr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.415320 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr" podUID="7a98873d-4d33-431c-b006-634029aafc31" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.415376 4860 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xtchg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.415413 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" podUID="f0162698-ec9f-47b8-896d-af15ae62668a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.415809 4860 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fml8s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.415844 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" podUID="91f0ff50-8025-417f-8349-bb7b79b04441" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused"
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.467487 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-2t9qw" podStartSLOduration=126.467471016 podStartE2EDuration="2m6.467471016s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:25.464879125 +0000 UTC m=+147.051662574" watchObservedRunningTime="2025-10-14 14:51:25.467471016 +0000 UTC m=+147.054254465"
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.505108 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g" podStartSLOduration=126.505089804 podStartE2EDuration="2m6.505089804s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:25.503531746 +0000 UTC m=+147.090315195" watchObservedRunningTime="2025-10-14 14:51:25.505089804 +0000 UTC m=+147.091873243"
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.506163 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:25 crc kubenswrapper[4860]: E1014 14:51:25.506304 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.006288223 +0000 UTC m=+147.593071672 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.506819 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:25 crc kubenswrapper[4860]: E1014 14:51:25.508190 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.008181989 +0000 UTC m=+147.594965438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.608066 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:25 crc kubenswrapper[4860]: E1014 14:51:25.608261 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.108236122 +0000 UTC m=+147.695019571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.608307 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:25 crc kubenswrapper[4860]: E1014 14:51:25.608627 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.108619681 +0000 UTC m=+147.695403130 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.627149 4860 patch_prober.go:28] interesting pod/router-default-5444994796-cv25g container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.627208 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cv25g" podUID="ae97ab9f-b072-4cb2-85da-577097382ed5" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.709069 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:25 crc kubenswrapper[4860]: E1014 14:51:25.709279 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.209244789 +0000 UTC m=+147.796028238 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.709431 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:25 crc kubenswrapper[4860]: E1014 14:51:25.710178 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.210166121 +0000 UTC m=+147.796949570 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.810763 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:25 crc kubenswrapper[4860]: E1014 14:51:25.811430 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.311411993 +0000 UTC m=+147.898195442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:25 crc kubenswrapper[4860]: I1014 14:51:25.912148 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt"
Oct 14 14:51:25 crc kubenswrapper[4860]: E1014 14:51:25.912564 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.412545213 +0000 UTC m=+147.999328732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.013670 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 14:51:26 crc kubenswrapper[4860]: E1014 14:51:26.014091 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.514077282 +0000 UTC m=+148.100860731 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.115122 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:26 crc kubenswrapper[4860]: E1014 14:51:26.115518 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.615500568 +0000 UTC m=+148.202284017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.216797 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:26 crc kubenswrapper[4860]: E1014 14:51:26.217255 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.717239913 +0000 UTC m=+148.304023362 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.318434 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:26 crc kubenswrapper[4860]: E1014 14:51:26.318743 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.818732421 +0000 UTC m=+148.405515870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.419531 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:26 crc kubenswrapper[4860]: E1014 14:51:26.419696 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.919671217 +0000 UTC m=+148.506454666 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.419778 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:26 crc kubenswrapper[4860]: E1014 14:51:26.420120 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:26.920111557 +0000 UTC m=+148.506895006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.421569 4860 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-fwhnr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.421617 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr" podUID="7a98873d-4d33-431c-b006-634029aafc31" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.421895 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-vnc8p" Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.450192 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rjc7c" podStartSLOduration=127.450176152 podStartE2EDuration="2m7.450176152s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:26.446994055 +0000 UTC m=+148.033777504" watchObservedRunningTime="2025-10-14 14:51:26.450176152 +0000 UTC m=+148.036959601" Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.522051 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-vnc8p" podStartSLOduration=9.522007565 podStartE2EDuration="9.522007565s" podCreationTimestamp="2025-10-14 14:51:17 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:26.480732139 +0000 UTC m=+148.067515588" watchObservedRunningTime="2025-10-14 14:51:26.522007565 +0000 UTC m=+148.108791014" Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.524213 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:26 crc kubenswrapper[4860]: E1014 14:51:26.524365 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:27.024348391 +0000 UTC m=+148.611131840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.524506 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:26 crc kubenswrapper[4860]: E1014 14:51:26.524939 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:27.024924376 +0000 UTC m=+148.611707825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.625626 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:26 crc kubenswrapper[4860]: E1014 14:51:26.625858 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-14 14:51:27.12582869 +0000 UTC m=+148.712612139 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.626199 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:26 crc kubenswrapper[4860]: E1014 14:51:26.626533 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:27.126520996 +0000 UTC m=+148.713304445 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.640449 4860 patch_prober.go:28] interesting pod/router-default-5444994796-cv25g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 14:51:26 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Oct 14 14:51:26 crc kubenswrapper[4860]: [+]process-running ok Oct 14 14:51:26 crc kubenswrapper[4860]: healthz check failed Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.640531 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cv25g" podUID="ae97ab9f-b072-4cb2-85da-577097382ed5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.727103 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:26 crc kubenswrapper[4860]: E1014 14:51:26.727312 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:27.227277746 +0000 UTC m=+148.814061205 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.727418 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:26 crc kubenswrapper[4860]: E1014 14:51:26.727827 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:27.22781086 +0000 UTC m=+148.814594309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.828961 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:26 crc kubenswrapper[4860]: E1014 14:51:26.829310 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:27.329295107 +0000 UTC m=+148.916078546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:26 crc kubenswrapper[4860]: I1014 14:51:26.930530 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:26 crc kubenswrapper[4860]: E1014 14:51:26.930899 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:27.430887989 +0000 UTC m=+149.017671438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.031517 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:27 crc kubenswrapper[4860]: E1014 14:51:27.031648 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:27.531625069 +0000 UTC m=+149.118408518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.031924 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.031999 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.032036 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.032076 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.032100 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:51:27 crc kubenswrapper[4860]: E1014 14:51:27.032425 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:27.532409767 +0000 UTC m=+149.119193216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.033411 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.037991 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.038565 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.054523 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.075822 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.083356 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.090685 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.134408 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:27 crc kubenswrapper[4860]: E1014 14:51:27.147961 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-14 14:51:27.647934414 +0000 UTC m=+149.234717863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.154015 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwpzd" Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.181113 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbpmm" podStartSLOduration=128.181097804 podStartE2EDuration="2m8.181097804s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:26.523372118 +0000 UTC m=+148.110155567" watchObservedRunningTime="2025-10-14 14:51:27.181097804 +0000 UTC m=+148.767881253" Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.238055 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:27 crc kubenswrapper[4860]: E1014 14:51:27.238353 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:27.738342325 +0000 UTC m=+149.325125774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.339210 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:27 crc kubenswrapper[4860]: E1014 14:51:27.339944 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:27.839922975 +0000 UTC m=+149.426706424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.432940 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-srxmc" event={"ID":"7415bf9f-2145-43a1-b6b8-121b39180dbd","Type":"ContainerStarted","Data":"d31e9d1eaafa9c631b785f17a08095c01aa18e7feb0a70c5b6be23d15ccd0dee"} Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.442789 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:27 crc kubenswrapper[4860]: E1014 14:51:27.443078 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:27.943065483 +0000 UTC m=+149.529848932 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.544713 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:27 crc kubenswrapper[4860]: E1014 14:51:27.545790 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:28.045768221 +0000 UTC m=+149.632551680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.646350 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:27 crc kubenswrapper[4860]: E1014 14:51:27.646749 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:28.146733566 +0000 UTC m=+149.733517015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.647054 4860 patch_prober.go:28] interesting pod/router-default-5444994796-cv25g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 14:51:27 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Oct 14 14:51:27 crc kubenswrapper[4860]: [+]process-running ok Oct 14 14:51:27 crc kubenswrapper[4860]: healthz check failed Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.647084 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cv25g" podUID="ae97ab9f-b072-4cb2-85da-577097382ed5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.747508 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:27 crc kubenswrapper[4860]: E1014 14:51:27.747796 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:28.247781834 +0000 UTC m=+149.834565283 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.849042 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:27 crc kubenswrapper[4860]: E1014 14:51:27.849372 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:28.349355765 +0000 UTC m=+149.936139214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:27 crc kubenswrapper[4860]: I1014 14:51:27.950319 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:27 crc kubenswrapper[4860]: E1014 14:51:27.950651 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:28.450637367 +0000 UTC m=+150.037420816 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.051789 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:28 crc kubenswrapper[4860]: E1014 14:51:28.052108 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:28.552096954 +0000 UTC m=+150.138880403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:28 crc kubenswrapper[4860]: W1014 14:51:28.124441 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-777a9de34df12687ca9ed34ef7da4f35555b488a7e1e4483912ff6e481700989 WatchSource:0}: Error finding container 777a9de34df12687ca9ed34ef7da4f35555b488a7e1e4483912ff6e481700989: Status 404 returned error can't find the container with id 777a9de34df12687ca9ed34ef7da4f35555b488a7e1e4483912ff6e481700989 Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.152591 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:28 crc kubenswrapper[4860]: E1014 14:51:28.152922 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:28.652906336 +0000 UTC m=+150.239689785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.153042 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.153080 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.167878 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:28 crc kubenswrapper[4860]: W1014 14:51:28.179036 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-75e3f7a0c0157829410ae4adb3872fb9d76ec74eab4ec8814b8e900fc75f4878 WatchSource:0}: Error finding container 75e3f7a0c0157829410ae4adb3872fb9d76ec74eab4ec8814b8e900fc75f4878: Status 404 returned error can't find the container with id 75e3f7a0c0157829410ae4adb3872fb9d76ec74eab4ec8814b8e900fc75f4878 Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.180877 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snzz9" Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.254726 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:28 crc kubenswrapper[4860]: E1014 14:51:28.256094 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:28.756079775 +0000 UTC m=+150.342863224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.356323 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:28 crc kubenswrapper[4860]: E1014 14:51:28.356423 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:28.856410006 +0000 UTC m=+150.443193455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.356583 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:28 crc kubenswrapper[4860]: E1014 14:51:28.356846 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:28.856839186 +0000 UTC m=+150.443622635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.436834 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"777a9de34df12687ca9ed34ef7da4f35555b488a7e1e4483912ff6e481700989"} Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.438666 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"75e3f7a0c0157829410ae4adb3872fb9d76ec74eab4ec8814b8e900fc75f4878"} Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.444195 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.457701 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:28 crc kubenswrapper[4860]: E1014 14:51:28.457866 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:28.957821762 +0000 UTC m=+150.544605211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.457939 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:28 crc kubenswrapper[4860]: E1014 14:51:28.458314 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:28.958302684 +0000 UTC m=+150.545086133 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:28 crc kubenswrapper[4860]: W1014 14:51:28.462822 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-5896b272dfe53cde17b2c6868bbb3f5e61fa6d10500d1a1136ecc417df8b13b7 WatchSource:0}: Error finding container 5896b272dfe53cde17b2c6868bbb3f5e61fa6d10500d1a1136ecc417df8b13b7: Status 404 returned error can't find the container with id 5896b272dfe53cde17b2c6868bbb3f5e61fa6d10500d1a1136ecc417df8b13b7 Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.559200 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:28 crc kubenswrapper[4860]: E1014 14:51:28.559854 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:29.059839983 +0000 UTC m=+150.646623432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.631214 4860 patch_prober.go:28] interesting pod/router-default-5444994796-cv25g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 14:51:28 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Oct 14 14:51:28 crc kubenswrapper[4860]: [+]process-running ok Oct 14 14:51:28 crc kubenswrapper[4860]: healthz check failed Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.631526 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cv25g" podUID="ae97ab9f-b072-4cb2-85da-577097382ed5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.660755 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:28 crc kubenswrapper[4860]: E1014 14:51:28.661183 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:29.161170207 +0000 UTC m=+150.747953656 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.762152 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:28 crc kubenswrapper[4860]: E1014 14:51:28.762481 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:29.262466891 +0000 UTC m=+150.849250340 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.775988 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.867983 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:28 crc kubenswrapper[4860]: E1014 14:51:28.868464 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:29.368447427 +0000 UTC m=+150.955230876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.968804 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:28 crc kubenswrapper[4860]: E1014 14:51:28.968922 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:29.46888974 +0000 UTC m=+151.055673189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:28 crc kubenswrapper[4860]: I1014 14:51:28.969322 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:28 crc kubenswrapper[4860]: E1014 14:51:28.969676 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:29.469659939 +0000 UTC m=+151.056443388 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.073955 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:29 crc kubenswrapper[4860]: E1014 14:51:29.074512 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:29.574494448 +0000 UTC m=+151.161277897 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.175129 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:29 crc kubenswrapper[4860]: E1014 14:51:29.175417 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:29.675406062 +0000 UTC m=+151.262189511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.245550 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.245597 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.276159 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:29 crc kubenswrapper[4860]: E1014 14:51:29.276581 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:29.776563862 +0000 UTC m=+151.363347311 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.377394 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:29 crc kubenswrapper[4860]: E1014 14:51:29.377737 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:29.877725912 +0000 UTC m=+151.464509351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.442339 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7a1cb79db75e8c9bb0c836f5be101b81b9c9f21cf463c1e1de0422b0eb1bb781"} Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.442379 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5896b272dfe53cde17b2c6868bbb3f5e61fa6d10500d1a1136ecc417df8b13b7"} Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.444196 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"afc0931df6cbb0c2578645d0441b40d3adffb9046e50882ec169d5fe57a7121a"} Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.445887 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8d253f43928cede115bf925de60dd870002d770df6b653df5b842d32b676aef8"} Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.478158 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:29 crc kubenswrapper[4860]: E1014 14:51:29.478836 4860 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:29.978819971 +0000 UTC m=+151.565603420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.579845 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:29 crc kubenswrapper[4860]: E1014 14:51:29.580190 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:30.080170796 +0000 UTC m=+151.666954245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.593900 4860 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z2m7d container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 14:51:29 crc kubenswrapper[4860]: [+]log ok Oct 14 14:51:29 crc kubenswrapper[4860]: [+]etcd ok Oct 14 14:51:29 crc kubenswrapper[4860]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 14:51:29 crc kubenswrapper[4860]: [-]poststarthook/generic-apiserver-start-informers failed: reason withheld Oct 14 14:51:29 crc kubenswrapper[4860]: [+]poststarthook/max-in-flight-filter ok Oct 14 14:51:29 crc kubenswrapper[4860]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 14:51:29 crc kubenswrapper[4860]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 14 14:51:29 crc kubenswrapper[4860]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Oct 14 14:51:29 crc kubenswrapper[4860]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Oct 14 14:51:29 crc kubenswrapper[4860]: [+]poststarthook/project.openshift.io-projectcache ok Oct 14 14:51:29 crc kubenswrapper[4860]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 14 14:51:29 crc kubenswrapper[4860]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Oct 14 14:51:29 crc kubenswrapper[4860]: 
[+]poststarthook/openshift.io-restmapperupdater ok Oct 14 14:51:29 crc kubenswrapper[4860]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 14:51:29 crc kubenswrapper[4860]: livez check failed Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.594212 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" podUID="16ad23c1-8e88-4556-85ce-0eca934160f9" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.630586 4860 patch_prober.go:28] interesting pod/router-default-5444994796-cv25g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 14:51:29 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Oct 14 14:51:29 crc kubenswrapper[4860]: [+]process-running ok Oct 14 14:51:29 crc kubenswrapper[4860]: healthz check failed Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.630632 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cv25g" podUID="ae97ab9f-b072-4cb2-85da-577097382ed5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.649351 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.649410 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.649684 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.649700 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.681410 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:29 crc kubenswrapper[4860]: E1014 14:51:29.681724 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:30.181710275 +0000 UTC m=+151.768493724 (durationBeforeRetry 500ms). 
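NOTE: the prober lines above show the two failure shapes in this boot: a TCP-level refusal ("connect: connection refused" from the downloads and console endpoints) and an HTTP 500 from a healthz endpoint that lists its failed sub-checks (router, openshift-apiserver). Below is a small sketch of the check being applied, assuming the documented kubelet criterion that any status from 200 through 399 passes; the URL is the downloads pod endpoint taken from the log.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP check the way the failures above are judged:
// transport errors and any status outside 200-399 count as failure.
func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp 10.217.0.19:8080: connect: connection refused"
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://10.217.0.19:8080/"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}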
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.783255 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:29 crc kubenswrapper[4860]: E1014 14:51:29.783678 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:30.283660535 +0000 UTC m=+151.870443984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.884397 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:29 crc kubenswrapper[4860]: E1014 14:51:29.884590 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:30.384563979 +0000 UTC m=+151.971347428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.884682 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:29 crc kubenswrapper[4860]: E1014 14:51:29.885094 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:30.385085902 +0000 UTC m=+151.971869351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:29 crc kubenswrapper[4860]: I1014 14:51:29.986329 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:29 crc kubenswrapper[4860]: E1014 14:51:29.986713 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:30.486687342 +0000 UTC m=+152.073470791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.030066 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xb28n"] Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.035223 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xb28n" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.039801 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.050468 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xb28n"] Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.056704 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.057279 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.060174 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.060427 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.068426 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.088770 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.088850 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7872d916-7101-4078-a051-702427c0321f-utilities\") pod \"certified-operators-xb28n\" (UID: \"7872d916-7101-4078-a051-702427c0321f\") " pod="openshift-marketplace/certified-operators-xb28n" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.088881 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7872d916-7101-4078-a051-702427c0321f-catalog-content\") pod \"certified-operators-xb28n\" (UID: \"7872d916-7101-4078-a051-702427c0321f\") " pod="openshift-marketplace/certified-operators-xb28n" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.088929 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvcbh\" (UniqueName: \"kubernetes.io/projected/7872d916-7101-4078-a051-702427c0321f-kube-api-access-bvcbh\") pod \"certified-operators-xb28n\" (UID: \"7872d916-7101-4078-a051-702427c0321f\") " pod="openshift-marketplace/certified-operators-xb28n" Oct 14 14:51:30 crc kubenswrapper[4860]: E1014 14:51:30.089308 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:30.589296327 +0000 UTC m=+152.176079776 (durationBeforeRetry 500ms). 
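NOTE: in between the volume retries, the kubelet is also picking up freshly scheduled pods: "SyncLoop ADD" is the watch event arriving from the API server, and "No sandbox for pod can be found. Need to start a new one" is the kubelet deciding to create a CRI sandbox for it. Below is a minimal sketch, assuming client-go, of observing the same ADD events for the openshift-marketplace namespace seen here (the initial list is replayed as ADDs, then live additions follow).

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch pods in the namespace the catalog pods above belong to.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 0, informers.WithNamespace("openshift-marketplace"))
	informer := factory.Core().V1().Pods().Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			fmt.Println("ADD observed:", pod.Namespace+"/"+pod.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	select {} // run until killed
}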
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.111857 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.132973 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfj6s" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.189990 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.190135 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a1d505b-82c9-4c08-9c49-95c9bcde1d03-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1a1d505b-82c9-4c08-9c49-95c9bcde1d03\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.190208 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a1d505b-82c9-4c08-9c49-95c9bcde1d03-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1a1d505b-82c9-4c08-9c49-95c9bcde1d03\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.190233 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7872d916-7101-4078-a051-702427c0321f-utilities\") pod \"certified-operators-xb28n\" (UID: \"7872d916-7101-4078-a051-702427c0321f\") " pod="openshift-marketplace/certified-operators-xb28n" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.190274 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7872d916-7101-4078-a051-702427c0321f-catalog-content\") pod \"certified-operators-xb28n\" (UID: \"7872d916-7101-4078-a051-702427c0321f\") " pod="openshift-marketplace/certified-operators-xb28n" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.190298 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvcbh\" (UniqueName: \"kubernetes.io/projected/7872d916-7101-4078-a051-702427c0321f-kube-api-access-bvcbh\") pod \"certified-operators-xb28n\" (UID: \"7872d916-7101-4078-a051-702427c0321f\") " pod="openshift-marketplace/certified-operators-xb28n" Oct 14 14:51:30 crc kubenswrapper[4860]: E1014 14:51:30.190627 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-10-14 14:51:30.690614112 +0000 UTC m=+152.277397561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.191688 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7872d916-7101-4078-a051-702427c0321f-catalog-content\") pod \"certified-operators-xb28n\" (UID: \"7872d916-7101-4078-a051-702427c0321f\") " pod="openshift-marketplace/certified-operators-xb28n" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.192395 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7872d916-7101-4078-a051-702427c0321f-utilities\") pod \"certified-operators-xb28n\" (UID: \"7872d916-7101-4078-a051-702427c0321f\") " pod="openshift-marketplace/certified-operators-xb28n" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.202323 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-b4brk" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.226422 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gd4lz"] Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.227268 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gd4lz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.244488 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.244522 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.244754 4860 patch_prober.go:28] interesting pod/console-f9d7485db-sr5b4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.244853 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sr5b4" podUID="b1b285a3-b917-4698-860d-a00c351727f2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.247303 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.278492 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvcbh\" (UniqueName: \"kubernetes.io/projected/7872d916-7101-4078-a051-702427c0321f-kube-api-access-bvcbh\") pod \"certified-operators-xb28n\" (UID: \"7872d916-7101-4078-a051-702427c0321f\") " pod="openshift-marketplace/certified-operators-xb28n" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.291754 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf902d5c-75ec-4993-8a0f-2a188b2826e3-utilities\") pod \"community-operators-gd4lz\" (UID: \"cf902d5c-75ec-4993-8a0f-2a188b2826e3\") " pod="openshift-marketplace/community-operators-gd4lz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.291849 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a1d505b-82c9-4c08-9c49-95c9bcde1d03-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1a1d505b-82c9-4c08-9c49-95c9bcde1d03\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.291878 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f968n\" (UniqueName: \"kubernetes.io/projected/cf902d5c-75ec-4993-8a0f-2a188b2826e3-kube-api-access-f968n\") pod \"community-operators-gd4lz\" (UID: \"cf902d5c-75ec-4993-8a0f-2a188b2826e3\") " pod="openshift-marketplace/community-operators-gd4lz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.291921 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf902d5c-75ec-4993-8a0f-2a188b2826e3-catalog-content\") pod \"community-operators-gd4lz\" (UID: \"cf902d5c-75ec-4993-8a0f-2a188b2826e3\") " pod="openshift-marketplace/community-operators-gd4lz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.291957 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.291979 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a1d505b-82c9-4c08-9c49-95c9bcde1d03-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1a1d505b-82c9-4c08-9c49-95c9bcde1d03\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.292345 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a1d505b-82c9-4c08-9c49-95c9bcde1d03-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1a1d505b-82c9-4c08-9c49-95c9bcde1d03\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 14:51:30 crc kubenswrapper[4860]: E1014 14:51:30.317819 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:30.81779454 +0000 UTC m=+152.404577989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.322171 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gd4lz"] Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.354667 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a1d505b-82c9-4c08-9c49-95c9bcde1d03-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1a1d505b-82c9-4c08-9c49-95c9bcde1d03\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.363841 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.382338 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xb28n" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.387222 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.411314 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.419295 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.420351 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf902d5c-75ec-4993-8a0f-2a188b2826e3-catalog-content\") pod \"community-operators-gd4lz\" (UID: \"cf902d5c-75ec-4993-8a0f-2a188b2826e3\") " pod="openshift-marketplace/community-operators-gd4lz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.420448 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf902d5c-75ec-4993-8a0f-2a188b2826e3-utilities\") pod \"community-operators-gd4lz\" (UID: \"cf902d5c-75ec-4993-8a0f-2a188b2826e3\") " pod="openshift-marketplace/community-operators-gd4lz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.420511 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f968n\" (UniqueName: \"kubernetes.io/projected/cf902d5c-75ec-4993-8a0f-2a188b2826e3-kube-api-access-f968n\") pod \"community-operators-gd4lz\" (UID: \"cf902d5c-75ec-4993-8a0f-2a188b2826e3\") " pod="openshift-marketplace/community-operators-gd4lz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.421402 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf902d5c-75ec-4993-8a0f-2a188b2826e3-utilities\") pod \"community-operators-gd4lz\" (UID: \"cf902d5c-75ec-4993-8a0f-2a188b2826e3\") " pod="openshift-marketplace/community-operators-gd4lz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.421609 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf902d5c-75ec-4993-8a0f-2a188b2826e3-catalog-content\") pod \"community-operators-gd4lz\" (UID: \"cf902d5c-75ec-4993-8a0f-2a188b2826e3\") " pod="openshift-marketplace/community-operators-gd4lz" Oct 14 14:51:30 crc kubenswrapper[4860]: E1014 14:51:30.430472 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:30.930442847 +0000 UTC m=+152.517226296 (durationBeforeRetry 500ms). 
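NOTE: the contrast inside one reconciler pass is the useful signal here: MountVolume.SetUp succeeds immediately for the catalog pods' "utilities" and "catalog-content" volumes, since emptyDir is node-local and involves no external driver, while the CSI-backed PVC keeps failing. Below is a sketch of an equivalent two-emptyDir pod spec built from the client-go API types; the pod, namespace, and volume names come from the log, while the image and mount paths are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Construct (not create) a pod whose volumes need only the kubelet:
	// emptyDir is backed by a directory on the node itself.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "certified-operators-xb28n",
			Namespace: "openshift-marketplace",
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "registry-server",
				Image: "example.invalid/catalog:latest", // illustrative
				VolumeMounts: []corev1.VolumeMount{
					{Name: "utilities", MountPath: "/utilities"},          // illustrative path
					{Name: "catalog-content", MountPath: "/catalog"},      // illustrative path
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "utilities", VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{}}},
				{Name: "catalog-content", VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{}}},
			},
		},
	}
	fmt.Println(pod.Name, "declares", len(pod.Spec.Volumes), "emptyDir volumes")
}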
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.453106 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kxhlz"] Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.454011 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.466326 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xtchg" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.471220 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f968n\" (UniqueName: \"kubernetes.io/projected/cf902d5c-75ec-4993-8a0f-2a188b2826e3-kube-api-access-f968n\") pod \"community-operators-gd4lz\" (UID: \"cf902d5c-75ec-4993-8a0f-2a188b2826e3\") " pod="openshift-marketplace/community-operators-gd4lz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.493896 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kxhlz"] Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.511005 4860 generic.go:334] "Generic (PLEG): container finished" podID="0fd546c2-8f3f-459f-bd94-75f8d755d9e5" containerID="a764fb83a254d629a5b1eaedcb3c26d9d0578f958f0e45462f240854fbcd0c97" exitCode=0 Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.511716 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp" event={"ID":"0fd546c2-8f3f-459f-bd94-75f8d755d9e5","Type":"ContainerDied","Data":"a764fb83a254d629a5b1eaedcb3c26d9d0578f958f0e45462f240854fbcd0c97"} Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.511746 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.523080 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44fsr\" (UniqueName: \"kubernetes.io/projected/904d68d4-d22d-483b-9fac-9fb2db95898f-kube-api-access-44fsr\") pod \"certified-operators-kxhlz\" (UID: \"904d68d4-d22d-483b-9fac-9fb2db95898f\") " pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.523132 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904d68d4-d22d-483b-9fac-9fb2db95898f-catalog-content\") pod \"certified-operators-kxhlz\" (UID: \"904d68d4-d22d-483b-9fac-9fb2db95898f\") " pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.523164 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.523260 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904d68d4-d22d-483b-9fac-9fb2db95898f-utilities\") pod \"certified-operators-kxhlz\" (UID: \"904d68d4-d22d-483b-9fac-9fb2db95898f\") " pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:51:30 crc kubenswrapper[4860]: E1014 14:51:30.523814 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:31.023802579 +0000 UTC m=+152.610586028 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.542370 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gd4lz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.628751 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.628952 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904d68d4-d22d-483b-9fac-9fb2db95898f-utilities\") pod \"certified-operators-kxhlz\" (UID: \"904d68d4-d22d-483b-9fac-9fb2db95898f\") " pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.629011 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44fsr\" (UniqueName: \"kubernetes.io/projected/904d68d4-d22d-483b-9fac-9fb2db95898f-kube-api-access-44fsr\") pod \"certified-operators-kxhlz\" (UID: \"904d68d4-d22d-483b-9fac-9fb2db95898f\") " pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.629189 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904d68d4-d22d-483b-9fac-9fb2db95898f-catalog-content\") pod \"certified-operators-kxhlz\" (UID: \"904d68d4-d22d-483b-9fac-9fb2db95898f\") " pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:51:30 crc kubenswrapper[4860]: E1014 14:51:30.630124 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:31.130105954 +0000 UTC m=+152.716889403 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.631225 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904d68d4-d22d-483b-9fac-9fb2db95898f-utilities\") pod \"certified-operators-kxhlz\" (UID: \"904d68d4-d22d-483b-9fac-9fb2db95898f\") " pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.631906 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904d68d4-d22d-483b-9fac-9fb2db95898f-catalog-content\") pod \"certified-operators-kxhlz\" (UID: \"904d68d4-d22d-483b-9fac-9fb2db95898f\") " pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.639125 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.644310 4860 patch_prober.go:28] interesting pod/router-default-5444994796-cv25g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 14:51:30 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Oct 14 14:51:30 crc kubenswrapper[4860]: [+]process-running ok Oct 14 14:51:30 crc kubenswrapper[4860]: healthz check failed Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.644370 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cv25g" podUID="ae97ab9f-b072-4cb2-85da-577097382ed5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.664934 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44fsr\" (UniqueName: \"kubernetes.io/projected/904d68d4-d22d-483b-9fac-9fb2db95898f-kube-api-access-44fsr\") pod \"certified-operators-kxhlz\" (UID: \"904d68d4-d22d-483b-9fac-9fb2db95898f\") " pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.687802 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nfwhp"] Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.688690 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.708632 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fwhnr" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.730058 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:30 crc kubenswrapper[4860]: E1014 14:51:30.731169 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:31.231156041 +0000 UTC m=+152.817939490 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.731921 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nfwhp"] Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.813244 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.831636 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.832191 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4ed01a-ec4e-41b6-90be-b246b51da828-utilities\") pod \"community-operators-nfwhp\" (UID: \"cf4ed01a-ec4e-41b6-90be-b246b51da828\") " pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.832240 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4ed01a-ec4e-41b6-90be-b246b51da828-catalog-content\") pod \"community-operators-nfwhp\" (UID: \"cf4ed01a-ec4e-41b6-90be-b246b51da828\") " pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.832383 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n89x4\" (UniqueName: \"kubernetes.io/projected/cf4ed01a-ec4e-41b6-90be-b246b51da828-kube-api-access-n89x4\") pod \"community-operators-nfwhp\" (UID: \"cf4ed01a-ec4e-41b6-90be-b246b51da828\") " pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:51:30 crc kubenswrapper[4860]: E1014 14:51:30.832506 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:31.332485685 +0000 UTC m=+152.919269134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.933189 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.933237 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n89x4\" (UniqueName: \"kubernetes.io/projected/cf4ed01a-ec4e-41b6-90be-b246b51da828-kube-api-access-n89x4\") pod \"community-operators-nfwhp\" (UID: \"cf4ed01a-ec4e-41b6-90be-b246b51da828\") " pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.933264 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4ed01a-ec4e-41b6-90be-b246b51da828-utilities\") pod \"community-operators-nfwhp\" (UID: \"cf4ed01a-ec4e-41b6-90be-b246b51da828\") " pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.933290 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4ed01a-ec4e-41b6-90be-b246b51da828-catalog-content\") pod \"community-operators-nfwhp\" (UID: \"cf4ed01a-ec4e-41b6-90be-b246b51da828\") " pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.933661 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4ed01a-ec4e-41b6-90be-b246b51da828-catalog-content\") pod \"community-operators-nfwhp\" (UID: \"cf4ed01a-ec4e-41b6-90be-b246b51da828\") " pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:51:30 crc kubenswrapper[4860]: E1014 14:51:30.933897 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:31.433886772 +0000 UTC m=+153.020670221 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.934551 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4ed01a-ec4e-41b6-90be-b246b51da828-utilities\") pod \"community-operators-nfwhp\" (UID: \"cf4ed01a-ec4e-41b6-90be-b246b51da828\") " pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:51:30 crc kubenswrapper[4860]: I1014 14:51:30.978114 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n89x4\" (UniqueName: \"kubernetes.io/projected/cf4ed01a-ec4e-41b6-90be-b246b51da828-kube-api-access-n89x4\") pod \"community-operators-nfwhp\" (UID: \"cf4ed01a-ec4e-41b6-90be-b246b51da828\") " pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.014273 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.034811 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:31 crc kubenswrapper[4860]: E1014 14:51:31.035197 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:31.535167025 +0000 UTC m=+153.121950474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.071293 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gd4lz"] Oct 14 14:51:31 crc kubenswrapper[4860]: W1014 14:51:31.083201 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf902d5c_75ec_4993_8a0f_2a188b2826e3.slice/crio-2fb581a7946a97829584f6295bc5aea1fe2a5a51892b8d1781226c377dba2a1a WatchSource:0}: Error finding container 2fb581a7946a97829584f6295bc5aea1fe2a5a51892b8d1781226c377dba2a1a: Status 404 returned error can't find the container with id 2fb581a7946a97829584f6295bc5aea1fe2a5a51892b8d1781226c377dba2a1a Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.086683 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.088706 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.093298 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.093487 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.094186 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.136267 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.136342 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/552fe4d5-540a-4b65-9105-013cb46c4abc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"552fe4d5-540a-4b65-9105-013cb46c4abc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.136383 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/552fe4d5-540a-4b65-9105-013cb46c4abc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"552fe4d5-540a-4b65-9105-013cb46c4abc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 14:51:31 crc kubenswrapper[4860]: E1014 14:51:31.136747 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-10-14 14:51:31.636732205 +0000 UTC m=+153.223515664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.190615 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xb28n"] Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.236944 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.237311 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/552fe4d5-540a-4b65-9105-013cb46c4abc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"552fe4d5-540a-4b65-9105-013cb46c4abc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.237350 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/552fe4d5-540a-4b65-9105-013cb46c4abc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"552fe4d5-540a-4b65-9105-013cb46c4abc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.237414 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/552fe4d5-540a-4b65-9105-013cb46c4abc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"552fe4d5-540a-4b65-9105-013cb46c4abc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 14:51:31 crc kubenswrapper[4860]: E1014 14:51:31.237475 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:31.737462424 +0000 UTC m=+153.324245873 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.258857 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/552fe4d5-540a-4b65-9105-013cb46c4abc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"552fe4d5-540a-4b65-9105-013cb46c4abc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.258891 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.299640 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kxhlz"] Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.341561 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:31 crc kubenswrapper[4860]: E1014 14:51:31.341923 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:31.841908884 +0000 UTC m=+153.428692333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:31 crc kubenswrapper[4860]: W1014 14:51:31.342764 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod904d68d4_d22d_483b_9fac_9fb2db95898f.slice/crio-81c2c1565f0e957d971e2c57cb0ee72d4cea18027f4b7d665a3fae83061f81e7 WatchSource:0}: Error finding container 81c2c1565f0e957d971e2c57cb0ee72d4cea18027f4b7d665a3fae83061f81e7: Status 404 returned error can't find the container with id 81c2c1565f0e957d971e2c57cb0ee72d4cea18027f4b7d665a3fae83061f81e7 Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.364006 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nfwhp"] Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.420494 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.442348 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:31 crc kubenswrapper[4860]: E1014 14:51:31.442745 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:31.942729976 +0000 UTC m=+153.529513425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.543802 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:31 crc kubenswrapper[4860]: E1014 14:51:31.544131 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:32.044108221 +0000 UTC m=+153.630891670 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.555816 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1a1d505b-82c9-4c08-9c49-95c9bcde1d03","Type":"ContainerStarted","Data":"d1f069063f023858d2333149e5bb835c7e0f845a3d0f6daed69c154dabda9a34"} Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.559689 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-srxmc" event={"ID":"7415bf9f-2145-43a1-b6b8-121b39180dbd","Type":"ContainerStarted","Data":"71635a6b399c52fceeb4ed467f34dc07f05628d32413f83e54bf2f2a80131362"} Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.571179 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxhlz" event={"ID":"904d68d4-d22d-483b-9fac-9fb2db95898f","Type":"ContainerStarted","Data":"81c2c1565f0e957d971e2c57cb0ee72d4cea18027f4b7d665a3fae83061f81e7"} Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.579115 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gd4lz" event={"ID":"cf902d5c-75ec-4993-8a0f-2a188b2826e3","Type":"ContainerStarted","Data":"2fb581a7946a97829584f6295bc5aea1fe2a5a51892b8d1781226c377dba2a1a"} Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.587497 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xb28n" event={"ID":"7872d916-7101-4078-a051-702427c0321f","Type":"ContainerStarted","Data":"9dd9b2014eebf930ba3a86e71b918dfb7ce3fc94f1238730f2c3d20d5f8a09ef"} Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.617117 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfwhp" event={"ID":"cf4ed01a-ec4e-41b6-90be-b246b51da828","Type":"ContainerStarted","Data":"552deedc6425f4305d1e805902b9d9572be17789bd7dd0a64f83895280ab4ed5"} Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.628657 4860 patch_prober.go:28] interesting pod/router-default-5444994796-cv25g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 14:51:31 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Oct 14 14:51:31 crc kubenswrapper[4860]: [+]process-running ok Oct 14 14:51:31 crc kubenswrapper[4860]: healthz check failed Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.628703 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cv25g" podUID="ae97ab9f-b072-4cb2-85da-577097382ed5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.644466 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
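
The router's startup probe keeps failing with the same aggregated health output: each sub-check reports a [+]name ok or [-]name failed: reason withheld line, and any failing check turns the whole endpoint into an HTTP 500 whose body ends in "healthz check failed". A self-contained Go sketch of a handler producing this response shape (the check names are taken from the probe output above; the handler itself is illustrative, not openshift-router code, and the port is an assumption):

```go
// Minimal sketch of the aggregated healthz response format seen in the probe
// output: per-check [+]/[-] lines, reasons withheld, HTTP 500 on any failure.
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	run  func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body := ""
		failed := false
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // probe logs "statuscode: 500"
			fmt.Fprint(w, body+"healthz check failed\n")
			return
		}
		fmt.Fprint(w, body+"ok\n")
	}
}

func main() {
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
		{"process-running", func() error { return nil }},
	}
	http.Handle("/healthz", healthz(checks))
	if err := http.ListenAndServe("127.0.0.1:1936", nil); err != nil {
		fmt.Println(err)
	}
}
```

Read this way, [-]backend-http and [-]has-synced mean the router process is up ([+]process-running ok) but has not yet synced routes or verified its HTTP backend, so the kubelet keeps reporting the startup probe as failed.
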
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:31 crc kubenswrapper[4860]: E1014 14:51:31.644599 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:32.144571825 +0000 UTC m=+153.731355274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.644788 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:31 crc kubenswrapper[4860]: E1014 14:51:31.645112 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:32.145101158 +0000 UTC m=+153.731884607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.746501 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:31 crc kubenswrapper[4860]: E1014 14:51:31.746674 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:32.246641877 +0000 UTC m=+153.833425326 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.746849 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:31 crc kubenswrapper[4860]: E1014 14:51:31.747532 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:32.247337285 +0000 UTC m=+153.834120734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.848493 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:31 crc kubenswrapper[4860]: E1014 14:51:31.849134 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:32.349116879 +0000 UTC m=+153.935900328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.949940 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp" Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.950444 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:31 crc kubenswrapper[4860]: E1014 14:51:31.950868 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:32.450852884 +0000 UTC m=+154.037636333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:31 crc kubenswrapper[4860]: I1014 14:51:31.958828 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 14 14:51:31 crc kubenswrapper[4860]: W1014 14:51:31.964555 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod552fe4d5_540a_4b65_9105_013cb46c4abc.slice/crio-d40b6ae5cf1d7cb5eb1daf1653a35faec764072ecfe11a98a7e82a8e30612dec WatchSource:0}: Error finding container d40b6ae5cf1d7cb5eb1daf1653a35faec764072ecfe11a98a7e82a8e30612dec: Status 404 returned error can't find the container with id d40b6ae5cf1d7cb5eb1daf1653a35faec764072ecfe11a98a7e82a8e30612dec Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.051495 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-secret-volume\") pod \"0fd546c2-8f3f-459f-bd94-75f8d755d9e5\" (UID: \"0fd546c2-8f3f-459f-bd94-75f8d755d9e5\") " Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.051582 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-config-volume\") pod \"0fd546c2-8f3f-459f-bd94-75f8d755d9e5\" (UID: \"0fd546c2-8f3f-459f-bd94-75f8d755d9e5\") " Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.051613 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2726\" (UniqueName: \"kubernetes.io/projected/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-kube-api-access-x2726\") pod \"0fd546c2-8f3f-459f-bd94-75f8d755d9e5\" (UID: \"0fd546c2-8f3f-459f-bd94-75f8d755d9e5\") " Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.051781 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:32 crc kubenswrapper[4860]: 
E1014 14:51:32.052096 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:32.552019384 +0000 UTC m=+154.138802833 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.053496 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-config-volume" (OuterVolumeSpecName: "config-volume") pod "0fd546c2-8f3f-459f-bd94-75f8d755d9e5" (UID: "0fd546c2-8f3f-459f-bd94-75f8d755d9e5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.063190 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0fd546c2-8f3f-459f-bd94-75f8d755d9e5" (UID: "0fd546c2-8f3f-459f-bd94-75f8d755d9e5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.066204 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-kube-api-access-x2726" (OuterVolumeSpecName: "kube-api-access-x2726") pod "0fd546c2-8f3f-459f-bd94-75f8d755d9e5" (UID: "0fd546c2-8f3f-459f-bd94-75f8d755d9e5"). InnerVolumeSpecName "kube-api-access-x2726". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.155746 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.156097 4860 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.156110 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.156121 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2726\" (UniqueName: \"kubernetes.io/projected/0fd546c2-8f3f-459f-bd94-75f8d755d9e5-kube-api-access-x2726\") on node \"crc\" DevicePath \"\"" Oct 14 14:51:32 crc kubenswrapper[4860]: E1014 14:51:32.156304 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:32.65628514 +0000 UTC m=+154.243068599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.215253 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m7rbf"] Oct 14 14:51:32 crc kubenswrapper[4860]: E1014 14:51:32.215489 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd546c2-8f3f-459f-bd94-75f8d755d9e5" containerName="collect-profiles" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.215510 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd546c2-8f3f-459f-bd94-75f8d755d9e5" containerName="collect-profiles" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.215635 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd546c2-8f3f-459f-bd94-75f8d755d9e5" containerName="collect-profiles" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.216559 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7rbf" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.221667 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.235615 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7rbf"] Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.326429 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:32 crc kubenswrapper[4860]: E1014 14:51:32.326509 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:32.826491635 +0000 UTC m=+154.413275084 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.326556 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-utilities\") pod \"redhat-marketplace-m7rbf\" (UID: \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\") " pod="openshift-marketplace/redhat-marketplace-m7rbf" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.326585 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.326620 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-catalog-content\") pod \"redhat-marketplace-m7rbf\" (UID: \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\") " pod="openshift-marketplace/redhat-marketplace-m7rbf" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.326654 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfv9n\" (UniqueName: \"kubernetes.io/projected/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-kube-api-access-sfv9n\") pod \"redhat-marketplace-m7rbf\" (UID: \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\") " pod="openshift-marketplace/redhat-marketplace-m7rbf" Oct 14 14:51:32 crc kubenswrapper[4860]: E1014 14:51:32.326946 4860 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:32.826928476 +0000 UTC m=+154.413711925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.407100 4860 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.427709 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:32 crc kubenswrapper[4860]: E1014 14:51:32.427813 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:32.927795449 +0000 UTC m=+154.514578898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.428162 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-utilities\") pod \"redhat-marketplace-m7rbf\" (UID: \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\") " pod="openshift-marketplace/redhat-marketplace-m7rbf" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.428206 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.428252 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-catalog-content\") pod \"redhat-marketplace-m7rbf\" (UID: \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\") " pod="openshift-marketplace/redhat-marketplace-m7rbf" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.428305 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-sfv9n\" (UniqueName: \"kubernetes.io/projected/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-kube-api-access-sfv9n\") pod \"redhat-marketplace-m7rbf\" (UID: \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\") " pod="openshift-marketplace/redhat-marketplace-m7rbf" Oct 14 14:51:32 crc kubenswrapper[4860]: E1014 14:51:32.428910 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:32.928889275 +0000 UTC m=+154.515672794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.428993 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-utilities\") pod \"redhat-marketplace-m7rbf\" (UID: \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\") " pod="openshift-marketplace/redhat-marketplace-m7rbf" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.429055 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-catalog-content\") pod \"redhat-marketplace-m7rbf\" (UID: \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\") " pod="openshift-marketplace/redhat-marketplace-m7rbf" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.445961 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfv9n\" (UniqueName: \"kubernetes.io/projected/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-kube-api-access-sfv9n\") pod \"redhat-marketplace-m7rbf\" (UID: \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\") " pod="openshift-marketplace/redhat-marketplace-m7rbf" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.529210 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:32 crc kubenswrapper[4860]: E1014 14:51:32.529353 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:33.029336968 +0000 UTC m=+154.616120417 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.529792 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:32 crc kubenswrapper[4860]: E1014 14:51:32.530158 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:33.030141748 +0000 UTC m=+154.616925197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.617413 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4m75d"] Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.618616 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.631410 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:32 crc kubenswrapper[4860]: E1014 14:51:32.631606 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:33.131566434 +0000 UTC m=+154.718349883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.631951 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-utilities\") pod \"redhat-marketplace-4m75d\" (UID: \"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8\") " pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.632082 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.632191 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-catalog-content\") pod \"redhat-marketplace-4m75d\" (UID: \"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8\") " pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.632277 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6sqv\" (UniqueName: \"kubernetes.io/projected/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-kube-api-access-t6sqv\") pod \"redhat-marketplace-4m75d\" (UID: \"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8\") " pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:51:32 crc kubenswrapper[4860]: E1014 14:51:32.632384 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:33.132376124 +0000 UTC m=+154.719159573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.634641 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4m75d"] Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.638381 4860 generic.go:334] "Generic (PLEG): container finished" podID="cf902d5c-75ec-4993-8a0f-2a188b2826e3" containerID="7e87efdb49d7b69fd2cfd64c491de3d2947beee774bd06e6fa562dd1faac7860" exitCode=0 Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.638491 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gd4lz" event={"ID":"cf902d5c-75ec-4993-8a0f-2a188b2826e3","Type":"ContainerDied","Data":"7e87efdb49d7b69fd2cfd64c491de3d2947beee774bd06e6fa562dd1faac7860"} Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.641038 4860 generic.go:334] "Generic (PLEG): container finished" podID="7872d916-7101-4078-a051-702427c0321f" containerID="c1656a4995c8107d4bfccd69a75d744bd3840c514fd385c414c77f0e71c3a9f1" exitCode=0 Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.641099 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xb28n" event={"ID":"7872d916-7101-4078-a051-702427c0321f","Type":"ContainerDied","Data":"c1656a4995c8107d4bfccd69a75d744bd3840c514fd385c414c77f0e71c3a9f1"} Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.643244 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.644039 4860 generic.go:334] "Generic (PLEG): container finished" podID="cf4ed01a-ec4e-41b6-90be-b246b51da828" containerID="92b8fd9d58d8b2440df1a3a2c1638a1f806ef9d7ad23d03ec64f588e8783a51a" exitCode=0 Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.644121 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfwhp" event={"ID":"cf4ed01a-ec4e-41b6-90be-b246b51da828","Type":"ContainerDied","Data":"92b8fd9d58d8b2440df1a3a2c1638a1f806ef9d7ad23d03ec64f588e8783a51a"} Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.653014 4860 patch_prober.go:28] interesting pod/router-default-5444994796-cv25g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 14:51:32 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Oct 14 14:51:32 crc kubenswrapper[4860]: [+]process-running ok Oct 14 14:51:32 crc kubenswrapper[4860]: healthz check failed Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.653124 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cv25g" podUID="ae97ab9f-b072-4cb2-85da-577097382ed5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.666313 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"1a1d505b-82c9-4c08-9c49-95c9bcde1d03","Type":"ContainerStarted","Data":"2d0a9c8542246bbafdc0d1ffa53fbf5442e893c4b98ac6f89b963e8746a44844"} Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.672899 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-srxmc" event={"ID":"7415bf9f-2145-43a1-b6b8-121b39180dbd","Type":"ContainerStarted","Data":"32ed7e84b31289bc2f2d99351d5ec883867db72b3f2fb5abe123e5a4f1e1873f"} Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.674557 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"552fe4d5-540a-4b65-9105-013cb46c4abc","Type":"ContainerStarted","Data":"d40b6ae5cf1d7cb5eb1daf1653a35faec764072ecfe11a98a7e82a8e30612dec"} Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.676521 4860 generic.go:334] "Generic (PLEG): container finished" podID="904d68d4-d22d-483b-9fac-9fb2db95898f" containerID="4c619f4975726c5078dd222223d5a7ee42275b3debfacaec531e093d5783a640" exitCode=0 Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.676557 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxhlz" event={"ID":"904d68d4-d22d-483b-9fac-9fb2db95898f","Type":"ContainerDied","Data":"4c619f4975726c5078dd222223d5a7ee42275b3debfacaec531e093d5783a640"} Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.686083 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp" event={"ID":"0fd546c2-8f3f-459f-bd94-75f8d755d9e5","Type":"ContainerDied","Data":"1add0fdbb61e5ef29e4f6a0897ed013bd40c561867ba7369cb7805bdb77c0f89"} Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.686117 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1add0fdbb61e5ef29e4f6a0897ed013bd40c561867ba7369cb7805bdb77c0f89" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.686167 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.687561 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7rbf" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.733159 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.733474 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-utilities\") pod \"redhat-marketplace-4m75d\" (UID: \"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8\") " pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.733632 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-catalog-content\") pod \"redhat-marketplace-4m75d\" (UID: \"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8\") " pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.733735 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6sqv\" (UniqueName: \"kubernetes.io/projected/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-kube-api-access-t6sqv\") pod \"redhat-marketplace-4m75d\" (UID: \"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8\") " pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:51:32 crc kubenswrapper[4860]: E1014 14:51:32.734466 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:33.234452187 +0000 UTC m=+154.821235636 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.734893 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-utilities\") pod \"redhat-marketplace-4m75d\" (UID: \"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8\") " pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.735228 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-catalog-content\") pod \"redhat-marketplace-4m75d\" (UID: \"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8\") " pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.782901 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6sqv\" (UniqueName: \"kubernetes.io/projected/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-kube-api-access-t6sqv\") pod \"redhat-marketplace-4m75d\" (UID: \"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8\") " pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.834934 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:32 crc kubenswrapper[4860]: E1014 14:51:32.835402 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:33.33534617 +0000 UTC m=+154.922129669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.936649 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:32 crc kubenswrapper[4860]: E1014 14:51:32.936812 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:33.436778657 +0000 UTC m=+155.023562116 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.937224 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:32 crc kubenswrapper[4860]: E1014 14:51:32.937547 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:33.437534205 +0000 UTC m=+155.024317654 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:32 crc kubenswrapper[4860]: I1014 14:51:32.943892 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.039681 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:33 crc kubenswrapper[4860]: E1014 14:51:33.040062 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:33.540047788 +0000 UTC m=+155.126831237 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.141052 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:33 crc kubenswrapper[4860]: E1014 14:51:33.141338 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:33.641325811 +0000 UTC m=+155.228109260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.156976 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.162585 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-z2m7d" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.200332 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4m75d"] Oct 14 14:51:33 crc kubenswrapper[4860]: W1014 14:51:33.207717 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5fb0dec_1ed9_4a83_8a55_4f229f200cf8.slice/crio-a14ec000d14c89a52861bd1b9ff9b803d7fc970b54760cf7112544b8c6fbef92 WatchSource:0}: Error finding container a14ec000d14c89a52861bd1b9ff9b803d7fc970b54760cf7112544b8c6fbef92: Status 404 returned error can't find the container with id a14ec000d14c89a52861bd1b9ff9b803d7fc970b54760cf7112544b8c6fbef92 Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.215859 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7rbf"] Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.225467 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ll2pt"] Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.234361 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ll2pt" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.239786 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.242208 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:33 crc kubenswrapper[4860]: E1014 14:51:33.242695 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 14:51:33.742670756 +0000 UTC m=+155.329454205 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.244294 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ll2pt"] Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.343308 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.343648 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/221c2ea0-2c26-436a-a2cd-f77091de581f-catalog-content\") pod \"redhat-operators-ll2pt\" (UID: \"221c2ea0-2c26-436a-a2cd-f77091de581f\") " pod="openshift-marketplace/redhat-operators-ll2pt" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.343670 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/221c2ea0-2c26-436a-a2cd-f77091de581f-utilities\") pod \"redhat-operators-ll2pt\" (UID: \"221c2ea0-2c26-436a-a2cd-f77091de581f\") " pod="openshift-marketplace/redhat-operators-ll2pt" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.343689 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxkmm\" (UniqueName: \"kubernetes.io/projected/221c2ea0-2c26-436a-a2cd-f77091de581f-kube-api-access-sxkmm\") pod \"redhat-operators-ll2pt\" (UID: \"221c2ea0-2c26-436a-a2cd-f77091de581f\") " pod="openshift-marketplace/redhat-operators-ll2pt" Oct 14 14:51:33 crc kubenswrapper[4860]: E1014 14:51:33.344657 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 14:51:33.844645525 +0000 UTC m=+155.431428974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-msfwt" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.346201 4860 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-14T14:51:32.40712697Z","Handler":null,"Name":""} Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.351630 4860 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.351664 4860 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.445080 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.445411 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/221c2ea0-2c26-436a-a2cd-f77091de581f-catalog-content\") pod \"redhat-operators-ll2pt\" (UID: \"221c2ea0-2c26-436a-a2cd-f77091de581f\") " pod="openshift-marketplace/redhat-operators-ll2pt" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.445452 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/221c2ea0-2c26-436a-a2cd-f77091de581f-utilities\") pod \"redhat-operators-ll2pt\" (UID: \"221c2ea0-2c26-436a-a2cd-f77091de581f\") " pod="openshift-marketplace/redhat-operators-ll2pt" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.445479 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxkmm\" (UniqueName: \"kubernetes.io/projected/221c2ea0-2c26-436a-a2cd-f77091de581f-kube-api-access-sxkmm\") pod \"redhat-operators-ll2pt\" (UID: \"221c2ea0-2c26-436a-a2cd-f77091de581f\") " pod="openshift-marketplace/redhat-operators-ll2pt" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.446428 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/221c2ea0-2c26-436a-a2cd-f77091de581f-catalog-content\") pod \"redhat-operators-ll2pt\" (UID: \"221c2ea0-2c26-436a-a2cd-f77091de581f\") " pod="openshift-marketplace/redhat-operators-ll2pt" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.446719 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/221c2ea0-2c26-436a-a2cd-f77091de581f-utilities\") pod \"redhat-operators-ll2pt\" (UID: \"221c2ea0-2c26-436a-a2cd-f77091de581f\") " 
pod="openshift-marketplace/redhat-operators-ll2pt" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.452457 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.465839 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxkmm\" (UniqueName: \"kubernetes.io/projected/221c2ea0-2c26-436a-a2cd-f77091de581f-kube-api-access-sxkmm\") pod \"redhat-operators-ll2pt\" (UID: \"221c2ea0-2c26-436a-a2cd-f77091de581f\") " pod="openshift-marketplace/redhat-operators-ll2pt" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.546418 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.548601 4860 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.548651 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.555053 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ll2pt" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.617468 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-msfwt\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.631326 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-npmpc"] Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.632557 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.631330 4860 patch_prober.go:28] interesting pod/router-default-5444994796-cv25g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 14:51:33 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Oct 14 14:51:33 crc kubenswrapper[4860]: [+]process-running ok Oct 14 14:51:33 crc kubenswrapper[4860]: healthz check failed Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.633298 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cv25g" podUID="ae97ab9f-b072-4cb2-85da-577097382ed5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.651392 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-npmpc"] Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.693835 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7rbf" event={"ID":"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a","Type":"ContainerStarted","Data":"547f7b78e1b4a48615c61262110b778d70b0dfd0c171d0635362cba031ecd752"} Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.696415 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4m75d" event={"ID":"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8","Type":"ContainerStarted","Data":"a14ec000d14c89a52861bd1b9ff9b803d7fc970b54760cf7112544b8c6fbef92"} Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.701407 4860 generic.go:334] "Generic (PLEG): container finished" podID="1a1d505b-82c9-4c08-9c49-95c9bcde1d03" containerID="2d0a9c8542246bbafdc0d1ffa53fbf5442e893c4b98ac6f89b963e8746a44844" exitCode=0 Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.701474 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1a1d505b-82c9-4c08-9c49-95c9bcde1d03","Type":"ContainerDied","Data":"2d0a9c8542246bbafdc0d1ffa53fbf5442e893c4b98ac6f89b963e8746a44844"} Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.703883 4860 generic.go:334] "Generic (PLEG): container finished" podID="552fe4d5-540a-4b65-9105-013cb46c4abc" containerID="fd8c58f75991ea4e5d0c2e4f57954b2c16a03a548607fec3ea55d9dc86efbb72" exitCode=0 Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.703921 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"552fe4d5-540a-4b65-9105-013cb46c4abc","Type":"ContainerDied","Data":"fd8c58f75991ea4e5d0c2e4f57954b2c16a03a548607fec3ea55d9dc86efbb72"} Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.708191 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-srxmc" event={"ID":"7415bf9f-2145-43a1-b6b8-121b39180dbd","Type":"ContainerStarted","Data":"dc2ba97901a23705e69da7885ed1efe5eab1e2f619bd15ee560fb88176ad304f"} Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.752242 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d599715-5ef5-4fca-adc9-9f1edad3be77-catalog-content\") pod \"redhat-operators-npmpc\" (UID: 
\"4d599715-5ef5-4fca-adc9-9f1edad3be77\") " pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.752315 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d599715-5ef5-4fca-adc9-9f1edad3be77-utilities\") pod \"redhat-operators-npmpc\" (UID: \"4d599715-5ef5-4fca-adc9-9f1edad3be77\") " pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.752334 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qbmt\" (UniqueName: \"kubernetes.io/projected/4d599715-5ef5-4fca-adc9-9f1edad3be77-kube-api-access-5qbmt\") pod \"redhat-operators-npmpc\" (UID: \"4d599715-5ef5-4fca-adc9-9f1edad3be77\") " pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.752932 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-srxmc" podStartSLOduration=16.752911374 podStartE2EDuration="16.752911374s" podCreationTimestamp="2025-10-14 14:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:33.751811208 +0000 UTC m=+155.338594657" watchObservedRunningTime="2025-10-14 14:51:33.752911374 +0000 UTC m=+155.339694843" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.791501 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.863806 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d599715-5ef5-4fca-adc9-9f1edad3be77-catalog-content\") pod \"redhat-operators-npmpc\" (UID: \"4d599715-5ef5-4fca-adc9-9f1edad3be77\") " pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.863875 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d599715-5ef5-4fca-adc9-9f1edad3be77-utilities\") pod \"redhat-operators-npmpc\" (UID: \"4d599715-5ef5-4fca-adc9-9f1edad3be77\") " pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.863897 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qbmt\" (UniqueName: \"kubernetes.io/projected/4d599715-5ef5-4fca-adc9-9f1edad3be77-kube-api-access-5qbmt\") pod \"redhat-operators-npmpc\" (UID: \"4d599715-5ef5-4fca-adc9-9f1edad3be77\") " pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.864827 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d599715-5ef5-4fca-adc9-9f1edad3be77-utilities\") pod \"redhat-operators-npmpc\" (UID: \"4d599715-5ef5-4fca-adc9-9f1edad3be77\") " pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.864900 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d599715-5ef5-4fca-adc9-9f1edad3be77-catalog-content\") pod \"redhat-operators-npmpc\" (UID: 
\"4d599715-5ef5-4fca-adc9-9f1edad3be77\") " pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.868756 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ll2pt"] Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.892048 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qbmt\" (UniqueName: \"kubernetes.io/projected/4d599715-5ef5-4fca-adc9-9f1edad3be77-kube-api-access-5qbmt\") pod \"redhat-operators-npmpc\" (UID: \"4d599715-5ef5-4fca-adc9-9f1edad3be77\") " pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:51:33 crc kubenswrapper[4860]: I1014 14:51:33.956488 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.084721 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.274521 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a1d505b-82c9-4c08-9c49-95c9bcde1d03-kubelet-dir\") pod \"1a1d505b-82c9-4c08-9c49-95c9bcde1d03\" (UID: \"1a1d505b-82c9-4c08-9c49-95c9bcde1d03\") " Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.274608 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a1d505b-82c9-4c08-9c49-95c9bcde1d03-kube-api-access\") pod \"1a1d505b-82c9-4c08-9c49-95c9bcde1d03\" (UID: \"1a1d505b-82c9-4c08-9c49-95c9bcde1d03\") " Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.274744 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a1d505b-82c9-4c08-9c49-95c9bcde1d03-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1a1d505b-82c9-4c08-9c49-95c9bcde1d03" (UID: "1a1d505b-82c9-4c08-9c49-95c9bcde1d03"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.274906 4860 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a1d505b-82c9-4c08-9c49-95c9bcde1d03-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.281665 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a1d505b-82c9-4c08-9c49-95c9bcde1d03-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1a1d505b-82c9-4c08-9c49-95c9bcde1d03" (UID: "1a1d505b-82c9-4c08-9c49-95c9bcde1d03"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.339685 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-npmpc"] Oct 14 14:51:34 crc kubenswrapper[4860]: W1014 14:51:34.354551 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d599715_5ef5_4fca_adc9_9f1edad3be77.slice/crio-5a1cc155b52371d1c0d8319cebc4892d295a9692f13a3e6a85f342aa23fbb722 WatchSource:0}: Error finding container 5a1cc155b52371d1c0d8319cebc4892d295a9692f13a3e6a85f342aa23fbb722: Status 404 returned error can't find the container with id 5a1cc155b52371d1c0d8319cebc4892d295a9692f13a3e6a85f342aa23fbb722 Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.375985 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a1d505b-82c9-4c08-9c49-95c9bcde1d03-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.387656 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-msfwt"] Oct 14 14:51:34 crc kubenswrapper[4860]: W1014 14:51:34.390247 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3beff9b_3e98_4d7d_88b0_bbe3271dcb78.slice/crio-59ed0064670c981e075f3981cb3c759ac359a8914f00baa0c78da7238837cebf WatchSource:0}: Error finding container 59ed0064670c981e075f3981cb3c759ac359a8914f00baa0c78da7238837cebf: Status 404 returned error can't find the container with id 59ed0064670c981e075f3981cb3c759ac359a8914f00baa0c78da7238837cebf Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.630268 4860 patch_prober.go:28] interesting pod/router-default-5444994796-cv25g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 14:51:34 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Oct 14 14:51:34 crc kubenswrapper[4860]: [+]process-running ok Oct 14 14:51:34 crc kubenswrapper[4860]: healthz check failed Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.630340 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cv25g" podUID="ae97ab9f-b072-4cb2-85da-577097382ed5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.712826 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1a1d505b-82c9-4c08-9c49-95c9bcde1d03","Type":"ContainerDied","Data":"d1f069063f023858d2333149e5bb835c7e0f845a3d0f6daed69c154dabda9a34"} Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.712865 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1f069063f023858d2333149e5bb835c7e0f845a3d0f6daed69c154dabda9a34" Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.712939 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.714853 4860 generic.go:334] "Generic (PLEG): container finished" podID="8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" containerID="bdbf9d98bfab643afa51d94edf28e91fb1f3bd7ef681121ade57975622cc0e70" exitCode=0 Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.714912 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7rbf" event={"ID":"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a","Type":"ContainerDied","Data":"bdbf9d98bfab643afa51d94edf28e91fb1f3bd7ef681121ade57975622cc0e70"} Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.716945 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" event={"ID":"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78","Type":"ContainerStarted","Data":"59ed0064670c981e075f3981cb3c759ac359a8914f00baa0c78da7238837cebf"} Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.718265 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-npmpc" event={"ID":"4d599715-5ef5-4fca-adc9-9f1edad3be77","Type":"ContainerStarted","Data":"5a1cc155b52371d1c0d8319cebc4892d295a9692f13a3e6a85f342aa23fbb722"} Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.720413 4860 generic.go:334] "Generic (PLEG): container finished" podID="e5fb0dec-1ed9-4a83-8a55-4f229f200cf8" containerID="d10c9260090c67d8f68366797d481686550d336566ed5e9b784a29c85b395c9a" exitCode=0 Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.720640 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4m75d" event={"ID":"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8","Type":"ContainerDied","Data":"d10c9260090c67d8f68366797d481686550d336566ed5e9b784a29c85b395c9a"} Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.723744 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ll2pt" event={"ID":"221c2ea0-2c26-436a-a2cd-f77091de581f","Type":"ContainerStarted","Data":"0bf9200a3b2538c3577e0a46e8a0589ca49cece230b1c9674730c6f6759517e0"} Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.723786 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ll2pt" event={"ID":"221c2ea0-2c26-436a-a2cd-f77091de581f","Type":"ContainerStarted","Data":"6cb47c50cc1918172218d9fd24ec33c82cfe405819b1c88e5013bc353670808d"} Oct 14 14:51:34 crc kubenswrapper[4860]: I1014 14:51:34.940741 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.075006 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.084444 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/552fe4d5-540a-4b65-9105-013cb46c4abc-kube-api-access\") pod \"552fe4d5-540a-4b65-9105-013cb46c4abc\" (UID: \"552fe4d5-540a-4b65-9105-013cb46c4abc\") " Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.084587 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/552fe4d5-540a-4b65-9105-013cb46c4abc-kubelet-dir\") pod \"552fe4d5-540a-4b65-9105-013cb46c4abc\" (UID: \"552fe4d5-540a-4b65-9105-013cb46c4abc\") " Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.084661 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552fe4d5-540a-4b65-9105-013cb46c4abc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "552fe4d5-540a-4b65-9105-013cb46c4abc" (UID: "552fe4d5-540a-4b65-9105-013cb46c4abc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.084816 4860 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/552fe4d5-540a-4b65-9105-013cb46c4abc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.090320 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552fe4d5-540a-4b65-9105-013cb46c4abc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "552fe4d5-540a-4b65-9105-013cb46c4abc" (UID: "552fe4d5-540a-4b65-9105-013cb46c4abc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.186121 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/552fe4d5-540a-4b65-9105-013cb46c4abc-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.498605 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-vnc8p" Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.628726 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.632896 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-cv25g" Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.744942 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" event={"ID":"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78","Type":"ContainerStarted","Data":"a6430f1704c9c168f6369614108a40170facdf12793738fb9209af2b195cf7ff"} Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.745056 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.751070 4860 generic.go:334] "Generic (PLEG): container finished" podID="4d599715-5ef5-4fca-adc9-9f1edad3be77" containerID="677aabb898484b7811147ba4b1acb21efa3cecee17ef75b9c588cc2eb5bdd6d1" exitCode=0 Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.751195 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-npmpc" event={"ID":"4d599715-5ef5-4fca-adc9-9f1edad3be77","Type":"ContainerDied","Data":"677aabb898484b7811147ba4b1acb21efa3cecee17ef75b9c588cc2eb5bdd6d1"} Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.754464 4860 generic.go:334] "Generic (PLEG): container finished" podID="221c2ea0-2c26-436a-a2cd-f77091de581f" containerID="0bf9200a3b2538c3577e0a46e8a0589ca49cece230b1c9674730c6f6759517e0" exitCode=0 Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.754527 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ll2pt" event={"ID":"221c2ea0-2c26-436a-a2cd-f77091de581f","Type":"ContainerDied","Data":"0bf9200a3b2538c3577e0a46e8a0589ca49cece230b1c9674730c6f6759517e0"} Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.760834 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.760887 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"552fe4d5-540a-4b65-9105-013cb46c4abc","Type":"ContainerDied","Data":"d40b6ae5cf1d7cb5eb1daf1653a35faec764072ecfe11a98a7e82a8e30612dec"} Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.760910 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d40b6ae5cf1d7cb5eb1daf1653a35faec764072ecfe11a98a7e82a8e30612dec" Oct 14 14:51:35 crc kubenswrapper[4860]: I1014 14:51:35.776790 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" podStartSLOduration=136.776771435 podStartE2EDuration="2m16.776771435s" podCreationTimestamp="2025-10-14 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:51:35.771614271 +0000 UTC m=+157.358397720" watchObservedRunningTime="2025-10-14 14:51:35.776771435 +0000 UTC m=+157.363554884" Oct 14 14:51:39 crc kubenswrapper[4860]: I1014 14:51:39.648972 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 14 14:51:39 crc kubenswrapper[4860]: I1014 14:51:39.649530 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 14 14:51:39 crc kubenswrapper[4860]: I1014 14:51:39.649065 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 14 14:51:39 crc kubenswrapper[4860]: I1014 14:51:39.649687 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 14 14:51:40 crc kubenswrapper[4860]: I1014 14:51:40.450471 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:40 crc kubenswrapper[4860]: I1014 14:51:40.455746 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 14:51:41 crc kubenswrapper[4860]: I1014 14:51:41.291546 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs\") pod \"network-metrics-daemon-vtscw\" (UID: \"2b36dd73-c75d-446e-85fe-d11afdd5a816\") " pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:51:41 crc kubenswrapper[4860]: I1014 14:51:41.299103 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/2b36dd73-c75d-446e-85fe-d11afdd5a816-metrics-certs\") pod \"network-metrics-daemon-vtscw\" (UID: \"2b36dd73-c75d-446e-85fe-d11afdd5a816\") " pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:51:41 crc kubenswrapper[4860]: I1014 14:51:41.496339 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtscw" Oct 14 14:51:42 crc kubenswrapper[4860]: I1014 14:51:42.042068 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vtscw"] Oct 14 14:51:42 crc kubenswrapper[4860]: W1014 14:51:42.066050 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b36dd73_c75d_446e_85fe_d11afdd5a816.slice/crio-2c2940a2c871853754ac9965ac6a80e5a11bd8b61b50556eca913455f95d2218 WatchSource:0}: Error finding container 2c2940a2c871853754ac9965ac6a80e5a11bd8b61b50556eca913455f95d2218: Status 404 returned error can't find the container with id 2c2940a2c871853754ac9965ac6a80e5a11bd8b61b50556eca913455f95d2218 Oct 14 14:51:42 crc kubenswrapper[4860]: I1014 14:51:42.833788 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vtscw" event={"ID":"2b36dd73-c75d-446e-85fe-d11afdd5a816","Type":"ContainerStarted","Data":"2c2940a2c871853754ac9965ac6a80e5a11bd8b61b50556eca913455f95d2218"} Oct 14 14:51:43 crc kubenswrapper[4860]: I1014 14:51:43.850137 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vtscw" event={"ID":"2b36dd73-c75d-446e-85fe-d11afdd5a816","Type":"ContainerStarted","Data":"2fdf849f33155ac53f19bbffcff2f63472aadd8311f4820ec9f7bdde1560c995"} Oct 14 14:51:49 crc kubenswrapper[4860]: I1014 14:51:49.649558 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 14 14:51:49 crc kubenswrapper[4860]: I1014 14:51:49.649913 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 14 14:51:49 crc kubenswrapper[4860]: I1014 14:51:49.649971 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-bvxsd" Oct 14 14:51:49 crc kubenswrapper[4860]: I1014 14:51:49.649570 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 14 14:51:49 crc kubenswrapper[4860]: I1014 14:51:49.650123 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 14 14:51:49 crc kubenswrapper[4860]: I1014 14:51:49.650554 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" 
containerStatusID={"Type":"cri-o","ID":"fb055d3ac7b35bf8a414d577aec4edb7dc68e4f60c84d6696a3de57c4187214b"} pod="openshift-console/downloads-7954f5f757-bvxsd" containerMessage="Container download-server failed liveness probe, will be restarted" Oct 14 14:51:49 crc kubenswrapper[4860]: I1014 14:51:49.650664 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" containerID="cri-o://fb055d3ac7b35bf8a414d577aec4edb7dc68e4f60c84d6696a3de57c4187214b" gracePeriod=2 Oct 14 14:51:49 crc kubenswrapper[4860]: I1014 14:51:49.650872 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 14 14:51:49 crc kubenswrapper[4860]: I1014 14:51:49.650915 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 14 14:51:51 crc kubenswrapper[4860]: I1014 14:51:51.900434 4860 generic.go:334] "Generic (PLEG): container finished" podID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerID="fb055d3ac7b35bf8a414d577aec4edb7dc68e4f60c84d6696a3de57c4187214b" exitCode=0 Oct 14 14:51:51 crc kubenswrapper[4860]: I1014 14:51:51.900507 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bvxsd" event={"ID":"6224e386-928b-4d64-a7f5-d43fb86e4b3a","Type":"ContainerDied","Data":"fb055d3ac7b35bf8a414d577aec4edb7dc68e4f60c84d6696a3de57c4187214b"} Oct 14 14:51:53 crc kubenswrapper[4860]: I1014 14:51:53.799398 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:51:59 crc kubenswrapper[4860]: I1014 14:51:59.245365 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:51:59 crc kubenswrapper[4860]: I1014 14:51:59.246014 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:51:59 crc kubenswrapper[4860]: I1014 14:51:59.649673 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 14 14:51:59 crc kubenswrapper[4860]: I1014 14:51:59.649728 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 14 14:52:00 crc 
kubenswrapper[4860]: I1014 14:52:00.389770 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbr7g" Oct 14 14:52:07 crc kubenswrapper[4860]: I1014 14:52:07.094264 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 14:52:09 crc kubenswrapper[4860]: I1014 14:52:09.649800 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 14 14:52:09 crc kubenswrapper[4860]: I1014 14:52:09.649852 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 14 14:52:10 crc kubenswrapper[4860]: E1014 14:52:10.917943 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 14 14:52:10 crc kubenswrapper[4860]: E1014 14:52:10.918477 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxkmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-ll2pt_openshift-marketplace(221c2ea0-2c26-436a-a2cd-f77091de581f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 14:52:10 crc kubenswrapper[4860]: E1014 14:52:10.920107 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system 
image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ll2pt" podUID="221c2ea0-2c26-436a-a2cd-f77091de581f" Oct 14 14:52:10 crc kubenswrapper[4860]: E1014 14:52:10.922176 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 14 14:52:10 crc kubenswrapper[4860]: E1014 14:52:10.922254 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5qbmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-npmpc_openshift-marketplace(4d599715-5ef5-4fca-adc9-9f1edad3be77): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 14:52:10 crc kubenswrapper[4860]: E1014 14:52:10.923536 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-npmpc" podUID="4d599715-5ef5-4fca-adc9-9f1edad3be77" Oct 14 14:52:11 crc kubenswrapper[4860]: E1014 14:52:11.481367 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ll2pt" podUID="221c2ea0-2c26-436a-a2cd-f77091de581f" Oct 14 14:52:11 crc kubenswrapper[4860]: E1014 14:52:11.481363 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-npmpc" podUID="4d599715-5ef5-4fca-adc9-9f1edad3be77" Oct 14 
14:52:11 crc kubenswrapper[4860]: E1014 14:52:11.535191 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 14 14:52:11 crc kubenswrapper[4860]: E1014 14:52:11.535357 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfv9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-m7rbf_openshift-marketplace(8da0ebd6-f5ac-4668-ada7-f71605ae4c4a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 14:52:11 crc kubenswrapper[4860]: E1014 14:52:11.536687 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-m7rbf" podUID="8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" Oct 14 14:52:12 crc kubenswrapper[4860]: E1014 14:52:12.916716 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m7rbf" podUID="8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" Oct 14 14:52:12 crc kubenswrapper[4860]: E1014 14:52:12.977393 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 14 14:52:12 crc kubenswrapper[4860]: E1014 14:52:12.977537 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f968n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gd4lz_openshift-marketplace(cf902d5c-75ec-4993-8a0f-2a188b2826e3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 14:52:12 crc kubenswrapper[4860]: E1014 14:52:12.978750 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gd4lz" podUID="cf902d5c-75ec-4993-8a0f-2a188b2826e3" Oct 14 14:52:14 crc kubenswrapper[4860]: E1014 14:52:14.448246 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gd4lz" podUID="cf902d5c-75ec-4993-8a0f-2a188b2826e3" Oct 14 14:52:14 crc kubenswrapper[4860]: E1014 14:52:14.556178 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 14 14:52:14 crc kubenswrapper[4860]: E1014 14:52:14.556310 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44fsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-kxhlz_openshift-marketplace(904d68d4-d22d-483b-9fac-9fb2db95898f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 14:52:14 crc kubenswrapper[4860]: E1014 14:52:14.557689 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-kxhlz" podUID="904d68d4-d22d-483b-9fac-9fb2db95898f" Oct 14 14:52:14 crc kubenswrapper[4860]: E1014 14:52:14.653119 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 14 14:52:14 crc kubenswrapper[4860]: E1014 14:52:14.653366 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n89x4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-nfwhp_openshift-marketplace(cf4ed01a-ec4e-41b6-90be-b246b51da828): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 14:52:14 crc kubenswrapper[4860]: E1014 14:52:14.654738 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-nfwhp" podUID="cf4ed01a-ec4e-41b6-90be-b246b51da828" Oct 14 14:52:14 crc kubenswrapper[4860]: E1014 14:52:14.718888 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 14 14:52:14 crc kubenswrapper[4860]: E1014 14:52:14.719373 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvcbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xb28n_openshift-marketplace(7872d916-7101-4078-a051-702427c0321f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 14:52:14 crc kubenswrapper[4860]: E1014 14:52:14.720517 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-xb28n" podUID="7872d916-7101-4078-a051-702427c0321f" Oct 14 14:52:15 crc kubenswrapper[4860]: I1014 14:52:15.019253 4860 generic.go:334] "Generic (PLEG): container finished" podID="e5fb0dec-1ed9-4a83-8a55-4f229f200cf8" containerID="b40557b1689de2e61fb080940e86c3d96dff01ee908eac3131710d70a9eec873" exitCode=0 Oct 14 14:52:15 crc kubenswrapper[4860]: I1014 14:52:15.019319 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4m75d" event={"ID":"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8","Type":"ContainerDied","Data":"b40557b1689de2e61fb080940e86c3d96dff01ee908eac3131710d70a9eec873"} Oct 14 14:52:15 crc kubenswrapper[4860]: I1014 14:52:15.024144 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vtscw" event={"ID":"2b36dd73-c75d-446e-85fe-d11afdd5a816","Type":"ContainerStarted","Data":"3d933f073d7a41c6fca70ec5a40cf2c82916f84e11828c7a63e86ff8257ee8e6"} Oct 14 14:52:15 crc kubenswrapper[4860]: I1014 14:52:15.029801 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bvxsd" event={"ID":"6224e386-928b-4d64-a7f5-d43fb86e4b3a","Type":"ContainerStarted","Data":"d82d06a8986eeddad0d81b5cc7395a78eae65c29c83de68015c9137b01651d0d"} Oct 14 14:52:15 crc kubenswrapper[4860]: I1014 14:52:15.031726 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 14 14:52:15 crc 
kubenswrapper[4860]: I1014 14:52:15.031765 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bvxsd" Oct 14 14:52:15 crc kubenswrapper[4860]: I1014 14:52:15.031797 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 14 14:52:15 crc kubenswrapper[4860]: E1014 14:52:15.032935 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-kxhlz" podUID="904d68d4-d22d-483b-9fac-9fb2db95898f" Oct 14 14:52:15 crc kubenswrapper[4860]: E1014 14:52:15.033198 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-nfwhp" podUID="cf4ed01a-ec4e-41b6-90be-b246b51da828" Oct 14 14:52:15 crc kubenswrapper[4860]: E1014 14:52:15.033271 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xb28n" podUID="7872d916-7101-4078-a051-702427c0321f" Oct 14 14:52:15 crc kubenswrapper[4860]: I1014 14:52:15.129603 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-vtscw" podStartSLOduration=177.129588219 podStartE2EDuration="2m57.129588219s" podCreationTimestamp="2025-10-14 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:52:15.123649676 +0000 UTC m=+196.710433145" watchObservedRunningTime="2025-10-14 14:52:15.129588219 +0000 UTC m=+196.716371668" Oct 14 14:52:16 crc kubenswrapper[4860]: I1014 14:52:16.036613 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 14 14:52:16 crc kubenswrapper[4860]: I1014 14:52:16.036703 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 14 14:52:17 crc kubenswrapper[4860]: I1014 14:52:17.039075 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 14 14:52:17 crc kubenswrapper[4860]: I1014 14:52:17.039134 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 14 14:52:18 crc kubenswrapper[4860]: I1014 14:52:18.045291 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4m75d" event={"ID":"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8","Type":"ContainerStarted","Data":"bef42835b2bc197629bb4a7bc2df659699fa170df5574a406cfc11041261ead2"} Oct 14 14:52:18 crc kubenswrapper[4860]: I1014 14:52:18.060729 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4m75d" podStartSLOduration=3.581616189 podStartE2EDuration="46.0607147s" podCreationTimestamp="2025-10-14 14:51:32 +0000 UTC" firstStartedPulling="2025-10-14 14:51:34.725625319 +0000 UTC m=+156.312408768" lastFinishedPulling="2025-10-14 14:52:17.20472383 +0000 UTC m=+198.791507279" observedRunningTime="2025-10-14 14:52:18.060465994 +0000 UTC m=+199.647249443" watchObservedRunningTime="2025-10-14 14:52:18.0607147 +0000 UTC m=+199.647498149" Oct 14 14:52:19 crc kubenswrapper[4860]: I1014 14:52:19.649230 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 14 14:52:19 crc kubenswrapper[4860]: I1014 14:52:19.649287 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvxsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 14 14:52:19 crc kubenswrapper[4860]: I1014 14:52:19.649287 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 14 14:52:19 crc kubenswrapper[4860]: I1014 14:52:19.649315 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvxsd" podUID="6224e386-928b-4d64-a7f5-d43fb86e4b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 14 14:52:22 crc kubenswrapper[4860]: I1014 14:52:22.944389 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:52:22 crc kubenswrapper[4860]: I1014 14:52:22.945083 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:52:23 crc kubenswrapper[4860]: I1014 14:52:23.358118 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:52:23 crc kubenswrapper[4860]: I1014 14:52:23.397947 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:52:23 crc kubenswrapper[4860]: I1014 14:52:23.587593 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4m75d"] Oct 14 14:52:25 crc kubenswrapper[4860]: I1014 14:52:25.093251 4860 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-4m75d" podUID="e5fb0dec-1ed9-4a83-8a55-4f229f200cf8" containerName="registry-server" containerID="cri-o://bef42835b2bc197629bb4a7bc2df659699fa170df5574a406cfc11041261ead2" gracePeriod=2 Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.072667 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.120824 4860 generic.go:334] "Generic (PLEG): container finished" podID="e5fb0dec-1ed9-4a83-8a55-4f229f200cf8" containerID="bef42835b2bc197629bb4a7bc2df659699fa170df5574a406cfc11041261ead2" exitCode=0 Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.120867 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4m75d" event={"ID":"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8","Type":"ContainerDied","Data":"bef42835b2bc197629bb4a7bc2df659699fa170df5574a406cfc11041261ead2"} Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.120893 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4m75d" event={"ID":"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8","Type":"ContainerDied","Data":"a14ec000d14c89a52861bd1b9ff9b803d7fc970b54760cf7112544b8c6fbef92"} Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.120909 4860 scope.go:117] "RemoveContainer" containerID="bef42835b2bc197629bb4a7bc2df659699fa170df5574a406cfc11041261ead2" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.121071 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4m75d" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.145668 4860 scope.go:117] "RemoveContainer" containerID="b40557b1689de2e61fb080940e86c3d96dff01ee908eac3131710d70a9eec873" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.157953 4860 scope.go:117] "RemoveContainer" containerID="d10c9260090c67d8f68366797d481686550d336566ed5e9b784a29c85b395c9a" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.175670 4860 scope.go:117] "RemoveContainer" containerID="bef42835b2bc197629bb4a7bc2df659699fa170df5574a406cfc11041261ead2" Oct 14 14:52:26 crc kubenswrapper[4860]: E1014 14:52:26.176166 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bef42835b2bc197629bb4a7bc2df659699fa170df5574a406cfc11041261ead2\": container with ID starting with bef42835b2bc197629bb4a7bc2df659699fa170df5574a406cfc11041261ead2 not found: ID does not exist" containerID="bef42835b2bc197629bb4a7bc2df659699fa170df5574a406cfc11041261ead2" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.176205 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef42835b2bc197629bb4a7bc2df659699fa170df5574a406cfc11041261ead2"} err="failed to get container status \"bef42835b2bc197629bb4a7bc2df659699fa170df5574a406cfc11041261ead2\": rpc error: code = NotFound desc = could not find container \"bef42835b2bc197629bb4a7bc2df659699fa170df5574a406cfc11041261ead2\": container with ID starting with bef42835b2bc197629bb4a7bc2df659699fa170df5574a406cfc11041261ead2 not found: ID does not exist" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.176229 4860 scope.go:117] "RemoveContainer" containerID="b40557b1689de2e61fb080940e86c3d96dff01ee908eac3131710d70a9eec873" Oct 14 14:52:26 crc kubenswrapper[4860]: E1014 14:52:26.176608 
4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b40557b1689de2e61fb080940e86c3d96dff01ee908eac3131710d70a9eec873\": container with ID starting with b40557b1689de2e61fb080940e86c3d96dff01ee908eac3131710d70a9eec873 not found: ID does not exist" containerID="b40557b1689de2e61fb080940e86c3d96dff01ee908eac3131710d70a9eec873" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.176637 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40557b1689de2e61fb080940e86c3d96dff01ee908eac3131710d70a9eec873"} err="failed to get container status \"b40557b1689de2e61fb080940e86c3d96dff01ee908eac3131710d70a9eec873\": rpc error: code = NotFound desc = could not find container \"b40557b1689de2e61fb080940e86c3d96dff01ee908eac3131710d70a9eec873\": container with ID starting with b40557b1689de2e61fb080940e86c3d96dff01ee908eac3131710d70a9eec873 not found: ID does not exist" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.176656 4860 scope.go:117] "RemoveContainer" containerID="d10c9260090c67d8f68366797d481686550d336566ed5e9b784a29c85b395c9a" Oct 14 14:52:26 crc kubenswrapper[4860]: E1014 14:52:26.176854 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d10c9260090c67d8f68366797d481686550d336566ed5e9b784a29c85b395c9a\": container with ID starting with d10c9260090c67d8f68366797d481686550d336566ed5e9b784a29c85b395c9a not found: ID does not exist" containerID="d10c9260090c67d8f68366797d481686550d336566ed5e9b784a29c85b395c9a" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.176894 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d10c9260090c67d8f68366797d481686550d336566ed5e9b784a29c85b395c9a"} err="failed to get container status \"d10c9260090c67d8f68366797d481686550d336566ed5e9b784a29c85b395c9a\": rpc error: code = NotFound desc = could not find container \"d10c9260090c67d8f68366797d481686550d336566ed5e9b784a29c85b395c9a\": container with ID starting with d10c9260090c67d8f68366797d481686550d336566ed5e9b784a29c85b395c9a not found: ID does not exist" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.184507 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6sqv\" (UniqueName: \"kubernetes.io/projected/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-kube-api-access-t6sqv\") pod \"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8\" (UID: \"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8\") " Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.184578 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-catalog-content\") pod \"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8\" (UID: \"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8\") " Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.184735 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-utilities\") pod \"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8\" (UID: \"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8\") " Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.185478 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-utilities" (OuterVolumeSpecName: "utilities") pod 
"e5fb0dec-1ed9-4a83-8a55-4f229f200cf8" (UID: "e5fb0dec-1ed9-4a83-8a55-4f229f200cf8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.191294 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-kube-api-access-t6sqv" (OuterVolumeSpecName: "kube-api-access-t6sqv") pod "e5fb0dec-1ed9-4a83-8a55-4f229f200cf8" (UID: "e5fb0dec-1ed9-4a83-8a55-4f229f200cf8"). InnerVolumeSpecName "kube-api-access-t6sqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.198938 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5fb0dec-1ed9-4a83-8a55-4f229f200cf8" (UID: "e5fb0dec-1ed9-4a83-8a55-4f229f200cf8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.286381 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6sqv\" (UniqueName: \"kubernetes.io/projected/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-kube-api-access-t6sqv\") on node \"crc\" DevicePath \"\"" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.286412 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.286421 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.449913 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4m75d"] Oct 14 14:52:26 crc kubenswrapper[4860]: I1014 14:52:26.455807 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4m75d"] Oct 14 14:52:27 crc kubenswrapper[4860]: I1014 14:52:27.072072 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5fb0dec-1ed9-4a83-8a55-4f229f200cf8" path="/var/lib/kubelet/pods/e5fb0dec-1ed9-4a83-8a55-4f229f200cf8/volumes" Oct 14 14:52:27 crc kubenswrapper[4860]: I1014 14:52:27.129383 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ll2pt" event={"ID":"221c2ea0-2c26-436a-a2cd-f77091de581f","Type":"ContainerStarted","Data":"605dad3235c4f11b6ad5b9daf77469a709f21f75c780398b127138eeea4f630a"} Oct 14 14:52:27 crc kubenswrapper[4860]: I1014 14:52:27.134593 4860 generic.go:334] "Generic (PLEG): container finished" podID="8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" containerID="b20890da7851bb722c15725cb6c0acbd2374247782c9ff0ed019338c992bb98c" exitCode=0 Oct 14 14:52:27 crc kubenswrapper[4860]: I1014 14:52:27.134631 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7rbf" event={"ID":"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a","Type":"ContainerDied","Data":"b20890da7851bb722c15725cb6c0acbd2374247782c9ff0ed019338c992bb98c"} Oct 14 14:52:28 crc kubenswrapper[4860]: I1014 14:52:28.141557 4860 generic.go:334] "Generic (PLEG): container finished" podID="221c2ea0-2c26-436a-a2cd-f77091de581f" 
containerID="605dad3235c4f11b6ad5b9daf77469a709f21f75c780398b127138eeea4f630a" exitCode=0 Oct 14 14:52:28 crc kubenswrapper[4860]: I1014 14:52:28.141676 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ll2pt" event={"ID":"221c2ea0-2c26-436a-a2cd-f77091de581f","Type":"ContainerDied","Data":"605dad3235c4f11b6ad5b9daf77469a709f21f75c780398b127138eeea4f630a"} Oct 14 14:52:29 crc kubenswrapper[4860]: I1014 14:52:29.245628 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:52:29 crc kubenswrapper[4860]: I1014 14:52:29.245685 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:52:29 crc kubenswrapper[4860]: I1014 14:52:29.245727 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:52:29 crc kubenswrapper[4860]: I1014 14:52:29.246495 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 14:52:29 crc kubenswrapper[4860]: I1014 14:52:29.246544 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81" gracePeriod=600 Oct 14 14:52:29 crc kubenswrapper[4860]: I1014 14:52:29.658076 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bvxsd" Oct 14 14:52:30 crc kubenswrapper[4860]: I1014 14:52:30.157647 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxhlz" event={"ID":"904d68d4-d22d-483b-9fac-9fb2db95898f","Type":"ContainerStarted","Data":"752808e6e1f41a5e7141934c8dcfb43a6df72509a909b93407f2848be167eeb8"} Oct 14 14:52:30 crc kubenswrapper[4860]: I1014 14:52:30.160619 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-npmpc" event={"ID":"4d599715-5ef5-4fca-adc9-9f1edad3be77","Type":"ContainerStarted","Data":"b216436b53c1d2369d2a65075a8fd8c943f882e34b9b8cd187e7a5f0061a695d"} Oct 14 14:52:30 crc kubenswrapper[4860]: I1014 14:52:30.162951 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81" exitCode=0 Oct 14 14:52:30 crc kubenswrapper[4860]: I1014 14:52:30.163065 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" 
event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81"} Oct 14 14:52:30 crc kubenswrapper[4860]: I1014 14:52:30.163096 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"25f232e83add52308352a0c71839405001f25fe7657f02b1bf05e81be7c47a92"} Oct 14 14:52:30 crc kubenswrapper[4860]: I1014 14:52:30.166835 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ll2pt" event={"ID":"221c2ea0-2c26-436a-a2cd-f77091de581f","Type":"ContainerStarted","Data":"f7e0b75cde6b3a26d22e3fbc190e42569bb7976f0061f9e5b256e2fe7083e3ae"} Oct 14 14:52:30 crc kubenswrapper[4860]: I1014 14:52:30.169265 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gd4lz" event={"ID":"cf902d5c-75ec-4993-8a0f-2a188b2826e3","Type":"ContainerStarted","Data":"7b6d4d3af929866caa6bae2db2afb9cb920d3570cc931dcf188c85ba665bd400"} Oct 14 14:52:30 crc kubenswrapper[4860]: I1014 14:52:30.178962 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfwhp" event={"ID":"cf4ed01a-ec4e-41b6-90be-b246b51da828","Type":"ContainerStarted","Data":"98c6ca0dca518a30f0e846f6aa04f4944c1f9219f92ff7043167e4d2ef07ec29"} Oct 14 14:52:30 crc kubenswrapper[4860]: I1014 14:52:30.181350 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7rbf" event={"ID":"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a","Type":"ContainerStarted","Data":"3e3d95020b72a1d5b660d6435ece25d30184b3b1ab3c783d09d92b303092e317"} Oct 14 14:52:30 crc kubenswrapper[4860]: I1014 14:52:30.214378 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ll2pt" podStartSLOduration=3.400342951 podStartE2EDuration="57.214363416s" podCreationTimestamp="2025-10-14 14:51:33 +0000 UTC" firstStartedPulling="2025-10-14 14:51:35.759380896 +0000 UTC m=+157.346164345" lastFinishedPulling="2025-10-14 14:52:29.573401361 +0000 UTC m=+211.160184810" observedRunningTime="2025-10-14 14:52:30.211902174 +0000 UTC m=+211.798685623" watchObservedRunningTime="2025-10-14 14:52:30.214363416 +0000 UTC m=+211.801146865" Oct 14 14:52:30 crc kubenswrapper[4860]: I1014 14:52:30.252233 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m7rbf" podStartSLOduration=3.736199324 podStartE2EDuration="58.25221726s" podCreationTimestamp="2025-10-14 14:51:32 +0000 UTC" firstStartedPulling="2025-10-14 14:51:34.716520059 +0000 UTC m=+156.303303508" lastFinishedPulling="2025-10-14 14:52:29.232537995 +0000 UTC m=+210.819321444" observedRunningTime="2025-10-14 14:52:30.250387763 +0000 UTC m=+211.837171212" watchObservedRunningTime="2025-10-14 14:52:30.25221726 +0000 UTC m=+211.839000709" Oct 14 14:52:31 crc kubenswrapper[4860]: I1014 14:52:31.189790 4860 generic.go:334] "Generic (PLEG): container finished" podID="904d68d4-d22d-483b-9fac-9fb2db95898f" containerID="752808e6e1f41a5e7141934c8dcfb43a6df72509a909b93407f2848be167eeb8" exitCode=0 Oct 14 14:52:31 crc kubenswrapper[4860]: I1014 14:52:31.189938 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxhlz" 
event={"ID":"904d68d4-d22d-483b-9fac-9fb2db95898f","Type":"ContainerDied","Data":"752808e6e1f41a5e7141934c8dcfb43a6df72509a909b93407f2848be167eeb8"} Oct 14 14:52:31 crc kubenswrapper[4860]: I1014 14:52:31.198402 4860 generic.go:334] "Generic (PLEG): container finished" podID="4d599715-5ef5-4fca-adc9-9f1edad3be77" containerID="b216436b53c1d2369d2a65075a8fd8c943f882e34b9b8cd187e7a5f0061a695d" exitCode=0 Oct 14 14:52:31 crc kubenswrapper[4860]: I1014 14:52:31.198527 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-npmpc" event={"ID":"4d599715-5ef5-4fca-adc9-9f1edad3be77","Type":"ContainerDied","Data":"b216436b53c1d2369d2a65075a8fd8c943f882e34b9b8cd187e7a5f0061a695d"} Oct 14 14:52:31 crc kubenswrapper[4860]: I1014 14:52:31.209194 4860 generic.go:334] "Generic (PLEG): container finished" podID="cf902d5c-75ec-4993-8a0f-2a188b2826e3" containerID="7b6d4d3af929866caa6bae2db2afb9cb920d3570cc931dcf188c85ba665bd400" exitCode=0 Oct 14 14:52:31 crc kubenswrapper[4860]: I1014 14:52:31.209281 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gd4lz" event={"ID":"cf902d5c-75ec-4993-8a0f-2a188b2826e3","Type":"ContainerDied","Data":"7b6d4d3af929866caa6bae2db2afb9cb920d3570cc931dcf188c85ba665bd400"} Oct 14 14:52:31 crc kubenswrapper[4860]: I1014 14:52:31.213920 4860 generic.go:334] "Generic (PLEG): container finished" podID="cf4ed01a-ec4e-41b6-90be-b246b51da828" containerID="98c6ca0dca518a30f0e846f6aa04f4944c1f9219f92ff7043167e4d2ef07ec29" exitCode=0 Oct 14 14:52:31 crc kubenswrapper[4860]: I1014 14:52:31.213957 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfwhp" event={"ID":"cf4ed01a-ec4e-41b6-90be-b246b51da828","Type":"ContainerDied","Data":"98c6ca0dca518a30f0e846f6aa04f4944c1f9219f92ff7043167e4d2ef07ec29"} Oct 14 14:52:32 crc kubenswrapper[4860]: I1014 14:52:32.220178 4860 generic.go:334] "Generic (PLEG): container finished" podID="7872d916-7101-4078-a051-702427c0321f" containerID="0fca09d86c13eb607eef613355ae2ae8f43ca47550c8e067ce516200e8331600" exitCode=0 Oct 14 14:52:32 crc kubenswrapper[4860]: I1014 14:52:32.220437 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xb28n" event={"ID":"7872d916-7101-4078-a051-702427c0321f","Type":"ContainerDied","Data":"0fca09d86c13eb607eef613355ae2ae8f43ca47550c8e067ce516200e8331600"} Oct 14 14:52:32 crc kubenswrapper[4860]: I1014 14:52:32.688940 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m7rbf" Oct 14 14:52:32 crc kubenswrapper[4860]: I1014 14:52:32.689338 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m7rbf" Oct 14 14:52:32 crc kubenswrapper[4860]: I1014 14:52:32.730877 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m7rbf" Oct 14 14:52:33 crc kubenswrapper[4860]: I1014 14:52:33.555824 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ll2pt" Oct 14 14:52:33 crc kubenswrapper[4860]: I1014 14:52:33.556772 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ll2pt" Oct 14 14:52:34 crc kubenswrapper[4860]: I1014 14:52:34.605779 4860 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-ll2pt" podUID="221c2ea0-2c26-436a-a2cd-f77091de581f" containerName="registry-server" probeResult="failure" output=< Oct 14 14:52:34 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 14:52:34 crc kubenswrapper[4860]: > Oct 14 14:52:36 crc kubenswrapper[4860]: I1014 14:52:36.242345 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxhlz" event={"ID":"904d68d4-d22d-483b-9fac-9fb2db95898f","Type":"ContainerStarted","Data":"1deeef1e4f5d515b30ea0692e67154b646041b22eb7a8bf7e190f94480fe3feb"} Oct 14 14:52:36 crc kubenswrapper[4860]: I1014 14:52:36.274061 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kxhlz" podStartSLOduration=4.25441942 podStartE2EDuration="1m6.274047636s" podCreationTimestamp="2025-10-14 14:51:30 +0000 UTC" firstStartedPulling="2025-10-14 14:51:32.678013304 +0000 UTC m=+154.264796753" lastFinishedPulling="2025-10-14 14:52:34.69764149 +0000 UTC m=+216.284424969" observedRunningTime="2025-10-14 14:52:36.271855771 +0000 UTC m=+217.858639220" watchObservedRunningTime="2025-10-14 14:52:36.274047636 +0000 UTC m=+217.860831085" Oct 14 14:52:37 crc kubenswrapper[4860]: I1014 14:52:37.257780 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfwhp" event={"ID":"cf4ed01a-ec4e-41b6-90be-b246b51da828","Type":"ContainerStarted","Data":"ebd3fba21404671d54446a1d0db5f0643368c744f900b8f4cd99353fadce2492"} Oct 14 14:52:37 crc kubenswrapper[4860]: I1014 14:52:37.260365 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-npmpc" event={"ID":"4d599715-5ef5-4fca-adc9-9f1edad3be77","Type":"ContainerStarted","Data":"fb6cbdf734a85e371e2c5d50e0ffb513679fcb6b4952c3c73188f70da9d8b33c"} Oct 14 14:52:37 crc kubenswrapper[4860]: I1014 14:52:37.265457 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gd4lz" event={"ID":"cf902d5c-75ec-4993-8a0f-2a188b2826e3","Type":"ContainerStarted","Data":"a118c560fb29fae5482cd392af1250900626746406d8eddc50df9ec8347b1214"} Oct 14 14:52:37 crc kubenswrapper[4860]: I1014 14:52:37.267650 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xb28n" event={"ID":"7872d916-7101-4078-a051-702427c0321f","Type":"ContainerStarted","Data":"b4fc99a1cc9d8bb553a163a25266d8de9373ad4f315f164af6346b622ad44d36"} Oct 14 14:52:37 crc kubenswrapper[4860]: I1014 14:52:37.282691 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nfwhp" podStartSLOduration=3.733004144 podStartE2EDuration="1m7.282673981s" podCreationTimestamp="2025-10-14 14:51:30 +0000 UTC" firstStartedPulling="2025-10-14 14:51:32.653242808 +0000 UTC m=+154.240026257" lastFinishedPulling="2025-10-14 14:52:36.202912645 +0000 UTC m=+217.789696094" observedRunningTime="2025-10-14 14:52:37.279621045 +0000 UTC m=+218.866404504" watchObservedRunningTime="2025-10-14 14:52:37.282673981 +0000 UTC m=+218.869457430" Oct 14 14:52:37 crc kubenswrapper[4860]: I1014 14:52:37.300691 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xb28n" podStartSLOduration=3.612349206 podStartE2EDuration="1m7.300674935s" podCreationTimestamp="2025-10-14 14:51:30 +0000 UTC" firstStartedPulling="2025-10-14 14:51:32.643057571 +0000 UTC 
m=+154.229841020" lastFinishedPulling="2025-10-14 14:52:36.3313833 +0000 UTC m=+217.918166749" observedRunningTime="2025-10-14 14:52:37.299970997 +0000 UTC m=+218.886754446" watchObservedRunningTime="2025-10-14 14:52:37.300674935 +0000 UTC m=+218.887458384" Oct 14 14:52:37 crc kubenswrapper[4860]: I1014 14:52:37.324592 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-npmpc" podStartSLOduration=4.009699567 podStartE2EDuration="1m4.324574477s" podCreationTimestamp="2025-10-14 14:51:33 +0000 UTC" firstStartedPulling="2025-10-14 14:51:35.753184497 +0000 UTC m=+157.339967946" lastFinishedPulling="2025-10-14 14:52:36.068059387 +0000 UTC m=+217.654842856" observedRunningTime="2025-10-14 14:52:37.322482804 +0000 UTC m=+218.909266253" watchObservedRunningTime="2025-10-14 14:52:37.324574477 +0000 UTC m=+218.911357926" Oct 14 14:52:40 crc kubenswrapper[4860]: I1014 14:52:40.383277 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xb28n" Oct 14 14:52:40 crc kubenswrapper[4860]: I1014 14:52:40.383893 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xb28n" Oct 14 14:52:40 crc kubenswrapper[4860]: I1014 14:52:40.432513 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xb28n" Oct 14 14:52:40 crc kubenswrapper[4860]: I1014 14:52:40.449407 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gd4lz" podStartSLOduration=6.710178773 podStartE2EDuration="1m10.449389854s" podCreationTimestamp="2025-10-14 14:51:30 +0000 UTC" firstStartedPulling="2025-10-14 14:51:32.642984159 +0000 UTC m=+154.229767608" lastFinishedPulling="2025-10-14 14:52:36.38219524 +0000 UTC m=+217.968978689" observedRunningTime="2025-10-14 14:52:37.346483978 +0000 UTC m=+218.933267457" watchObservedRunningTime="2025-10-14 14:52:40.449389854 +0000 UTC m=+222.036173303" Oct 14 14:52:40 crc kubenswrapper[4860]: I1014 14:52:40.543344 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gd4lz" Oct 14 14:52:40 crc kubenswrapper[4860]: I1014 14:52:40.543403 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gd4lz" Oct 14 14:52:40 crc kubenswrapper[4860]: I1014 14:52:40.594216 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gd4lz" Oct 14 14:52:40 crc kubenswrapper[4860]: I1014 14:52:40.814311 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:52:40 crc kubenswrapper[4860]: I1014 14:52:40.814671 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:52:40 crc kubenswrapper[4860]: I1014 14:52:40.854503 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:52:41 crc kubenswrapper[4860]: I1014 14:52:41.014933 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:52:41 crc kubenswrapper[4860]: I1014 14:52:41.015014 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:52:41 crc kubenswrapper[4860]: I1014 14:52:41.056744 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:52:41 crc kubenswrapper[4860]: I1014 14:52:41.382067 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gd4lz" Oct 14 14:52:41 crc kubenswrapper[4860]: I1014 14:52:41.385361 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:52:41 crc kubenswrapper[4860]: I1014 14:52:41.472331 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:52:41 crc kubenswrapper[4860]: I1014 14:52:41.502980 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xb28n" Oct 14 14:52:42 crc kubenswrapper[4860]: I1014 14:52:42.751360 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m7rbf" Oct 14 14:52:42 crc kubenswrapper[4860]: I1014 14:52:42.806862 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kxhlz"] Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.298444 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kxhlz" podUID="904d68d4-d22d-483b-9fac-9fb2db95898f" containerName="registry-server" containerID="cri-o://1deeef1e4f5d515b30ea0692e67154b646041b22eb7a8bf7e190f94480fe3feb" gracePeriod=2 Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.388094 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nfwhp"] Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.391137 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nfwhp" podUID="cf4ed01a-ec4e-41b6-90be-b246b51da828" containerName="registry-server" containerID="cri-o://ebd3fba21404671d54446a1d0db5f0643368c744f900b8f4cd99353fadce2492" gracePeriod=2 Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.597885 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ll2pt" Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.680628 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ll2pt" Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.722686 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.757109 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.921746 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n89x4\" (UniqueName: \"kubernetes.io/projected/cf4ed01a-ec4e-41b6-90be-b246b51da828-kube-api-access-n89x4\") pod \"cf4ed01a-ec4e-41b6-90be-b246b51da828\" (UID: \"cf4ed01a-ec4e-41b6-90be-b246b51da828\") " Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.921827 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44fsr\" (UniqueName: \"kubernetes.io/projected/904d68d4-d22d-483b-9fac-9fb2db95898f-kube-api-access-44fsr\") pod \"904d68d4-d22d-483b-9fac-9fb2db95898f\" (UID: \"904d68d4-d22d-483b-9fac-9fb2db95898f\") " Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.921848 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4ed01a-ec4e-41b6-90be-b246b51da828-utilities\") pod \"cf4ed01a-ec4e-41b6-90be-b246b51da828\" (UID: \"cf4ed01a-ec4e-41b6-90be-b246b51da828\") " Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.921877 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4ed01a-ec4e-41b6-90be-b246b51da828-catalog-content\") pod \"cf4ed01a-ec4e-41b6-90be-b246b51da828\" (UID: \"cf4ed01a-ec4e-41b6-90be-b246b51da828\") " Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.921917 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904d68d4-d22d-483b-9fac-9fb2db95898f-catalog-content\") pod \"904d68d4-d22d-483b-9fac-9fb2db95898f\" (UID: \"904d68d4-d22d-483b-9fac-9fb2db95898f\") " Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.921956 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904d68d4-d22d-483b-9fac-9fb2db95898f-utilities\") pod \"904d68d4-d22d-483b-9fac-9fb2db95898f\" (UID: \"904d68d4-d22d-483b-9fac-9fb2db95898f\") " Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.922641 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4ed01a-ec4e-41b6-90be-b246b51da828-utilities" (OuterVolumeSpecName: "utilities") pod "cf4ed01a-ec4e-41b6-90be-b246b51da828" (UID: "cf4ed01a-ec4e-41b6-90be-b246b51da828"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.922685 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/904d68d4-d22d-483b-9fac-9fb2db95898f-utilities" (OuterVolumeSpecName: "utilities") pod "904d68d4-d22d-483b-9fac-9fb2db95898f" (UID: "904d68d4-d22d-483b-9fac-9fb2db95898f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.929141 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/904d68d4-d22d-483b-9fac-9fb2db95898f-kube-api-access-44fsr" (OuterVolumeSpecName: "kube-api-access-44fsr") pod "904d68d4-d22d-483b-9fac-9fb2db95898f" (UID: "904d68d4-d22d-483b-9fac-9fb2db95898f"). InnerVolumeSpecName "kube-api-access-44fsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.929148 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4ed01a-ec4e-41b6-90be-b246b51da828-kube-api-access-n89x4" (OuterVolumeSpecName: "kube-api-access-n89x4") pod "cf4ed01a-ec4e-41b6-90be-b246b51da828" (UID: "cf4ed01a-ec4e-41b6-90be-b246b51da828"). InnerVolumeSpecName "kube-api-access-n89x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.957938 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.957998 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.971006 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/904d68d4-d22d-483b-9fac-9fb2db95898f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "904d68d4-d22d-483b-9fac-9fb2db95898f" (UID: "904d68d4-d22d-483b-9fac-9fb2db95898f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.971050 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4ed01a-ec4e-41b6-90be-b246b51da828-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf4ed01a-ec4e-41b6-90be-b246b51da828" (UID: "cf4ed01a-ec4e-41b6-90be-b246b51da828"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:52:43 crc kubenswrapper[4860]: I1014 14:52:43.993820 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.023575 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904d68d4-d22d-483b-9fac-9fb2db95898f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.023610 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904d68d4-d22d-483b-9fac-9fb2db95898f-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.023652 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n89x4\" (UniqueName: \"kubernetes.io/projected/cf4ed01a-ec4e-41b6-90be-b246b51da828-kube-api-access-n89x4\") on node \"crc\" DevicePath \"\"" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.023669 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44fsr\" (UniqueName: \"kubernetes.io/projected/904d68d4-d22d-483b-9fac-9fb2db95898f-kube-api-access-44fsr\") on node \"crc\" DevicePath \"\"" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.023681 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4ed01a-ec4e-41b6-90be-b246b51da828-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.023692 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cf4ed01a-ec4e-41b6-90be-b246b51da828-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.304923 4860 generic.go:334] "Generic (PLEG): container finished" podID="904d68d4-d22d-483b-9fac-9fb2db95898f" containerID="1deeef1e4f5d515b30ea0692e67154b646041b22eb7a8bf7e190f94480fe3feb" exitCode=0 Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.304971 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kxhlz" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.305005 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxhlz" event={"ID":"904d68d4-d22d-483b-9fac-9fb2db95898f","Type":"ContainerDied","Data":"1deeef1e4f5d515b30ea0692e67154b646041b22eb7a8bf7e190f94480fe3feb"} Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.305055 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxhlz" event={"ID":"904d68d4-d22d-483b-9fac-9fb2db95898f","Type":"ContainerDied","Data":"81c2c1565f0e957d971e2c57cb0ee72d4cea18027f4b7d665a3fae83061f81e7"} Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.305078 4860 scope.go:117] "RemoveContainer" containerID="1deeef1e4f5d515b30ea0692e67154b646041b22eb7a8bf7e190f94480fe3feb" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.308287 4860 generic.go:334] "Generic (PLEG): container finished" podID="cf4ed01a-ec4e-41b6-90be-b246b51da828" containerID="ebd3fba21404671d54446a1d0db5f0643368c744f900b8f4cd99353fadce2492" exitCode=0 Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.308338 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfwhp" event={"ID":"cf4ed01a-ec4e-41b6-90be-b246b51da828","Type":"ContainerDied","Data":"ebd3fba21404671d54446a1d0db5f0643368c744f900b8f4cd99353fadce2492"} Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.308383 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfwhp" event={"ID":"cf4ed01a-ec4e-41b6-90be-b246b51da828","Type":"ContainerDied","Data":"552deedc6425f4305d1e805902b9d9572be17789bd7dd0a64f83895280ab4ed5"} Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.308534 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nfwhp" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.331871 4860 scope.go:117] "RemoveContainer" containerID="752808e6e1f41a5e7141934c8dcfb43a6df72509a909b93407f2848be167eeb8" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.342974 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kxhlz"] Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.348004 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kxhlz"] Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.359486 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nfwhp"] Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.362952 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nfwhp"] Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.363316 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.368362 4860 scope.go:117] "RemoveContainer" containerID="4c619f4975726c5078dd222223d5a7ee42275b3debfacaec531e093d5783a640" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.384887 4860 scope.go:117] "RemoveContainer" containerID="1deeef1e4f5d515b30ea0692e67154b646041b22eb7a8bf7e190f94480fe3feb" Oct 14 14:52:44 crc kubenswrapper[4860]: E1014 14:52:44.385347 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1deeef1e4f5d515b30ea0692e67154b646041b22eb7a8bf7e190f94480fe3feb\": container with ID starting with 1deeef1e4f5d515b30ea0692e67154b646041b22eb7a8bf7e190f94480fe3feb not found: ID does not exist" containerID="1deeef1e4f5d515b30ea0692e67154b646041b22eb7a8bf7e190f94480fe3feb" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.385373 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1deeef1e4f5d515b30ea0692e67154b646041b22eb7a8bf7e190f94480fe3feb"} err="failed to get container status \"1deeef1e4f5d515b30ea0692e67154b646041b22eb7a8bf7e190f94480fe3feb\": rpc error: code = NotFound desc = could not find container \"1deeef1e4f5d515b30ea0692e67154b646041b22eb7a8bf7e190f94480fe3feb\": container with ID starting with 1deeef1e4f5d515b30ea0692e67154b646041b22eb7a8bf7e190f94480fe3feb not found: ID does not exist" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.385391 4860 scope.go:117] "RemoveContainer" containerID="752808e6e1f41a5e7141934c8dcfb43a6df72509a909b93407f2848be167eeb8" Oct 14 14:52:44 crc kubenswrapper[4860]: E1014 14:52:44.385698 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"752808e6e1f41a5e7141934c8dcfb43a6df72509a909b93407f2848be167eeb8\": container with ID starting with 752808e6e1f41a5e7141934c8dcfb43a6df72509a909b93407f2848be167eeb8 not found: ID does not exist" containerID="752808e6e1f41a5e7141934c8dcfb43a6df72509a909b93407f2848be167eeb8" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.385716 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"752808e6e1f41a5e7141934c8dcfb43a6df72509a909b93407f2848be167eeb8"} err="failed to get container status \"752808e6e1f41a5e7141934c8dcfb43a6df72509a909b93407f2848be167eeb8\": rpc error: code = 
NotFound desc = could not find container \"752808e6e1f41a5e7141934c8dcfb43a6df72509a909b93407f2848be167eeb8\": container with ID starting with 752808e6e1f41a5e7141934c8dcfb43a6df72509a909b93407f2848be167eeb8 not found: ID does not exist" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.385731 4860 scope.go:117] "RemoveContainer" containerID="4c619f4975726c5078dd222223d5a7ee42275b3debfacaec531e093d5783a640" Oct 14 14:52:44 crc kubenswrapper[4860]: E1014 14:52:44.385942 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c619f4975726c5078dd222223d5a7ee42275b3debfacaec531e093d5783a640\": container with ID starting with 4c619f4975726c5078dd222223d5a7ee42275b3debfacaec531e093d5783a640 not found: ID does not exist" containerID="4c619f4975726c5078dd222223d5a7ee42275b3debfacaec531e093d5783a640" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.385962 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c619f4975726c5078dd222223d5a7ee42275b3debfacaec531e093d5783a640"} err="failed to get container status \"4c619f4975726c5078dd222223d5a7ee42275b3debfacaec531e093d5783a640\": rpc error: code = NotFound desc = could not find container \"4c619f4975726c5078dd222223d5a7ee42275b3debfacaec531e093d5783a640\": container with ID starting with 4c619f4975726c5078dd222223d5a7ee42275b3debfacaec531e093d5783a640 not found: ID does not exist" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.385975 4860 scope.go:117] "RemoveContainer" containerID="ebd3fba21404671d54446a1d0db5f0643368c744f900b8f4cd99353fadce2492" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.398480 4860 scope.go:117] "RemoveContainer" containerID="98c6ca0dca518a30f0e846f6aa04f4944c1f9219f92ff7043167e4d2ef07ec29" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.412375 4860 scope.go:117] "RemoveContainer" containerID="92b8fd9d58d8b2440df1a3a2c1638a1f806ef9d7ad23d03ec64f588e8783a51a" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.439884 4860 scope.go:117] "RemoveContainer" containerID="ebd3fba21404671d54446a1d0db5f0643368c744f900b8f4cd99353fadce2492" Oct 14 14:52:44 crc kubenswrapper[4860]: E1014 14:52:44.440263 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebd3fba21404671d54446a1d0db5f0643368c744f900b8f4cd99353fadce2492\": container with ID starting with ebd3fba21404671d54446a1d0db5f0643368c744f900b8f4cd99353fadce2492 not found: ID does not exist" containerID="ebd3fba21404671d54446a1d0db5f0643368c744f900b8f4cd99353fadce2492" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.440310 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebd3fba21404671d54446a1d0db5f0643368c744f900b8f4cd99353fadce2492"} err="failed to get container status \"ebd3fba21404671d54446a1d0db5f0643368c744f900b8f4cd99353fadce2492\": rpc error: code = NotFound desc = could not find container \"ebd3fba21404671d54446a1d0db5f0643368c744f900b8f4cd99353fadce2492\": container with ID starting with ebd3fba21404671d54446a1d0db5f0643368c744f900b8f4cd99353fadce2492 not found: ID does not exist" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.440347 4860 scope.go:117] "RemoveContainer" containerID="98c6ca0dca518a30f0e846f6aa04f4944c1f9219f92ff7043167e4d2ef07ec29" Oct 14 14:52:44 crc kubenswrapper[4860]: E1014 14:52:44.440655 4860 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"98c6ca0dca518a30f0e846f6aa04f4944c1f9219f92ff7043167e4d2ef07ec29\": container with ID starting with 98c6ca0dca518a30f0e846f6aa04f4944c1f9219f92ff7043167e4d2ef07ec29 not found: ID does not exist" containerID="98c6ca0dca518a30f0e846f6aa04f4944c1f9219f92ff7043167e4d2ef07ec29" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.440681 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c6ca0dca518a30f0e846f6aa04f4944c1f9219f92ff7043167e4d2ef07ec29"} err="failed to get container status \"98c6ca0dca518a30f0e846f6aa04f4944c1f9219f92ff7043167e4d2ef07ec29\": rpc error: code = NotFound desc = could not find container \"98c6ca0dca518a30f0e846f6aa04f4944c1f9219f92ff7043167e4d2ef07ec29\": container with ID starting with 98c6ca0dca518a30f0e846f6aa04f4944c1f9219f92ff7043167e4d2ef07ec29 not found: ID does not exist" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.440701 4860 scope.go:117] "RemoveContainer" containerID="92b8fd9d58d8b2440df1a3a2c1638a1f806ef9d7ad23d03ec64f588e8783a51a" Oct 14 14:52:44 crc kubenswrapper[4860]: E1014 14:52:44.440951 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b8fd9d58d8b2440df1a3a2c1638a1f806ef9d7ad23d03ec64f588e8783a51a\": container with ID starting with 92b8fd9d58d8b2440df1a3a2c1638a1f806ef9d7ad23d03ec64f588e8783a51a not found: ID does not exist" containerID="92b8fd9d58d8b2440df1a3a2c1638a1f806ef9d7ad23d03ec64f588e8783a51a" Oct 14 14:52:44 crc kubenswrapper[4860]: I1014 14:52:44.440976 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b8fd9d58d8b2440df1a3a2c1638a1f806ef9d7ad23d03ec64f588e8783a51a"} err="failed to get container status \"92b8fd9d58d8b2440df1a3a2c1638a1f806ef9d7ad23d03ec64f588e8783a51a\": rpc error: code = NotFound desc = could not find container \"92b8fd9d58d8b2440df1a3a2c1638a1f806ef9d7ad23d03ec64f588e8783a51a\": container with ID starting with 92b8fd9d58d8b2440df1a3a2c1638a1f806ef9d7ad23d03ec64f588e8783a51a not found: ID does not exist" Oct 14 14:52:45 crc kubenswrapper[4860]: I1014 14:52:45.069587 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="904d68d4-d22d-483b-9fac-9fb2db95898f" path="/var/lib/kubelet/pods/904d68d4-d22d-483b-9fac-9fb2db95898f/volumes" Oct 14 14:52:45 crc kubenswrapper[4860]: I1014 14:52:45.071003 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf4ed01a-ec4e-41b6-90be-b246b51da828" path="/var/lib/kubelet/pods/cf4ed01a-ec4e-41b6-90be-b246b51da828/volumes" Oct 14 14:52:47 crc kubenswrapper[4860]: I1014 14:52:47.786923 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-npmpc"] Oct 14 14:52:47 crc kubenswrapper[4860]: I1014 14:52:47.787162 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-npmpc" podUID="4d599715-5ef5-4fca-adc9-9f1edad3be77" containerName="registry-server" containerID="cri-o://fb6cbdf734a85e371e2c5d50e0ffb513679fcb6b4952c3c73188f70da9d8b33c" gracePeriod=2 Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.160226 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.271589 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d599715-5ef5-4fca-adc9-9f1edad3be77-utilities\") pod \"4d599715-5ef5-4fca-adc9-9f1edad3be77\" (UID: \"4d599715-5ef5-4fca-adc9-9f1edad3be77\") " Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.271703 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qbmt\" (UniqueName: \"kubernetes.io/projected/4d599715-5ef5-4fca-adc9-9f1edad3be77-kube-api-access-5qbmt\") pod \"4d599715-5ef5-4fca-adc9-9f1edad3be77\" (UID: \"4d599715-5ef5-4fca-adc9-9f1edad3be77\") " Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.271822 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d599715-5ef5-4fca-adc9-9f1edad3be77-catalog-content\") pod \"4d599715-5ef5-4fca-adc9-9f1edad3be77\" (UID: \"4d599715-5ef5-4fca-adc9-9f1edad3be77\") " Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.272928 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d599715-5ef5-4fca-adc9-9f1edad3be77-utilities" (OuterVolumeSpecName: "utilities") pod "4d599715-5ef5-4fca-adc9-9f1edad3be77" (UID: "4d599715-5ef5-4fca-adc9-9f1edad3be77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.277959 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d599715-5ef5-4fca-adc9-9f1edad3be77-kube-api-access-5qbmt" (OuterVolumeSpecName: "kube-api-access-5qbmt") pod "4d599715-5ef5-4fca-adc9-9f1edad3be77" (UID: "4d599715-5ef5-4fca-adc9-9f1edad3be77"). InnerVolumeSpecName "kube-api-access-5qbmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.328495 4860 generic.go:334] "Generic (PLEG): container finished" podID="4d599715-5ef5-4fca-adc9-9f1edad3be77" containerID="fb6cbdf734a85e371e2c5d50e0ffb513679fcb6b4952c3c73188f70da9d8b33c" exitCode=0 Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.328531 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-npmpc" event={"ID":"4d599715-5ef5-4fca-adc9-9f1edad3be77","Type":"ContainerDied","Data":"fb6cbdf734a85e371e2c5d50e0ffb513679fcb6b4952c3c73188f70da9d8b33c"} Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.328555 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-npmpc" event={"ID":"4d599715-5ef5-4fca-adc9-9f1edad3be77","Type":"ContainerDied","Data":"5a1cc155b52371d1c0d8319cebc4892d295a9692f13a3e6a85f342aa23fbb722"} Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.328570 4860 scope.go:117] "RemoveContainer" containerID="fb6cbdf734a85e371e2c5d50e0ffb513679fcb6b4952c3c73188f70da9d8b33c" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.328663 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-npmpc" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.344777 4860 scope.go:117] "RemoveContainer" containerID="b216436b53c1d2369d2a65075a8fd8c943f882e34b9b8cd187e7a5f0061a695d" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.358400 4860 scope.go:117] "RemoveContainer" containerID="677aabb898484b7811147ba4b1acb21efa3cecee17ef75b9c588cc2eb5bdd6d1" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.360399 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d599715-5ef5-4fca-adc9-9f1edad3be77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d599715-5ef5-4fca-adc9-9f1edad3be77" (UID: "4d599715-5ef5-4fca-adc9-9f1edad3be77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.374823 4860 scope.go:117] "RemoveContainer" containerID="fb6cbdf734a85e371e2c5d50e0ffb513679fcb6b4952c3c73188f70da9d8b33c" Oct 14 14:52:48 crc kubenswrapper[4860]: E1014 14:52:48.375220 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb6cbdf734a85e371e2c5d50e0ffb513679fcb6b4952c3c73188f70da9d8b33c\": container with ID starting with fb6cbdf734a85e371e2c5d50e0ffb513679fcb6b4952c3c73188f70da9d8b33c not found: ID does not exist" containerID="fb6cbdf734a85e371e2c5d50e0ffb513679fcb6b4952c3c73188f70da9d8b33c" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.375265 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb6cbdf734a85e371e2c5d50e0ffb513679fcb6b4952c3c73188f70da9d8b33c"} err="failed to get container status \"fb6cbdf734a85e371e2c5d50e0ffb513679fcb6b4952c3c73188f70da9d8b33c\": rpc error: code = NotFound desc = could not find container \"fb6cbdf734a85e371e2c5d50e0ffb513679fcb6b4952c3c73188f70da9d8b33c\": container with ID starting with fb6cbdf734a85e371e2c5d50e0ffb513679fcb6b4952c3c73188f70da9d8b33c not found: ID does not exist" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.375320 4860 scope.go:117] "RemoveContainer" containerID="b216436b53c1d2369d2a65075a8fd8c943f882e34b9b8cd187e7a5f0061a695d" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.375602 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d599715-5ef5-4fca-adc9-9f1edad3be77-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.375620 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d599715-5ef5-4fca-adc9-9f1edad3be77-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.375631 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qbmt\" (UniqueName: \"kubernetes.io/projected/4d599715-5ef5-4fca-adc9-9f1edad3be77-kube-api-access-5qbmt\") on node \"crc\" DevicePath \"\"" Oct 14 14:52:48 crc kubenswrapper[4860]: E1014 14:52:48.375642 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b216436b53c1d2369d2a65075a8fd8c943f882e34b9b8cd187e7a5f0061a695d\": container with ID starting with b216436b53c1d2369d2a65075a8fd8c943f882e34b9b8cd187e7a5f0061a695d not found: ID does not exist" 
containerID="b216436b53c1d2369d2a65075a8fd8c943f882e34b9b8cd187e7a5f0061a695d" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.375673 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b216436b53c1d2369d2a65075a8fd8c943f882e34b9b8cd187e7a5f0061a695d"} err="failed to get container status \"b216436b53c1d2369d2a65075a8fd8c943f882e34b9b8cd187e7a5f0061a695d\": rpc error: code = NotFound desc = could not find container \"b216436b53c1d2369d2a65075a8fd8c943f882e34b9b8cd187e7a5f0061a695d\": container with ID starting with b216436b53c1d2369d2a65075a8fd8c943f882e34b9b8cd187e7a5f0061a695d not found: ID does not exist" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.375690 4860 scope.go:117] "RemoveContainer" containerID="677aabb898484b7811147ba4b1acb21efa3cecee17ef75b9c588cc2eb5bdd6d1" Oct 14 14:52:48 crc kubenswrapper[4860]: E1014 14:52:48.375898 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677aabb898484b7811147ba4b1acb21efa3cecee17ef75b9c588cc2eb5bdd6d1\": container with ID starting with 677aabb898484b7811147ba4b1acb21efa3cecee17ef75b9c588cc2eb5bdd6d1 not found: ID does not exist" containerID="677aabb898484b7811147ba4b1acb21efa3cecee17ef75b9c588cc2eb5bdd6d1" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.375922 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677aabb898484b7811147ba4b1acb21efa3cecee17ef75b9c588cc2eb5bdd6d1"} err="failed to get container status \"677aabb898484b7811147ba4b1acb21efa3cecee17ef75b9c588cc2eb5bdd6d1\": rpc error: code = NotFound desc = could not find container \"677aabb898484b7811147ba4b1acb21efa3cecee17ef75b9c588cc2eb5bdd6d1\": container with ID starting with 677aabb898484b7811147ba4b1acb21efa3cecee17ef75b9c588cc2eb5bdd6d1 not found: ID does not exist" Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.673087 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-npmpc"] Oct 14 14:52:48 crc kubenswrapper[4860]: I1014 14:52:48.675953 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-npmpc"] Oct 14 14:52:49 crc kubenswrapper[4860]: I1014 14:52:49.067654 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d599715-5ef5-4fca-adc9-9f1edad3be77" path="/var/lib/kubelet/pods/4d599715-5ef5-4fca-adc9-9f1edad3be77/volumes" Oct 14 14:52:49 crc kubenswrapper[4860]: I1014 14:52:49.339698 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5xlzj"] Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.363508 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" podUID="1271b3e0-b6e9-45cf-a267-ab013c556fc6" containerName="oauth-openshift" containerID="cri-o://7c55007a50846c0d7570879ff4288d976d3f2968987fa1a0c72f787738362381" gracePeriod=15 Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.706892 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.711684 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-trusted-ca-bundle\") pod \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.712663 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "1271b3e0-b6e9-45cf-a267-ab013c556fc6" (UID: "1271b3e0-b6e9-45cf-a267-ab013c556fc6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.713006 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-session\") pod \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.713830 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-provider-selection\") pod \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.713902 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-router-certs\") pod \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.713931 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-serving-cert\") pod \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.714285 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1271b3e0-b6e9-45cf-a267-ab013c556fc6-audit-dir\") pod \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.714315 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-login\") pod \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.714357 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-ocp-branding-template\") pod \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.714390 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-audit-policies\") pod \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.714427 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-cliconfig\") pod \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.714457 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-error\") pod \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.714484 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wzsc\" (UniqueName: \"kubernetes.io/projected/1271b3e0-b6e9-45cf-a267-ab013c556fc6-kube-api-access-6wzsc\") pod \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.714510 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-idp-0-file-data\") pod \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.714542 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-service-ca\") pod \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\" (UID: \"1271b3e0-b6e9-45cf-a267-ab013c556fc6\") " Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.715016 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "1271b3e0-b6e9-45cf-a267-ab013c556fc6" (UID: "1271b3e0-b6e9-45cf-a267-ab013c556fc6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.715040 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.715076 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1271b3e0-b6e9-45cf-a267-ab013c556fc6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "1271b3e0-b6e9-45cf-a267-ab013c556fc6" (UID: "1271b3e0-b6e9-45cf-a267-ab013c556fc6"). 
InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.715437 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "1271b3e0-b6e9-45cf-a267-ab013c556fc6" (UID: "1271b3e0-b6e9-45cf-a267-ab013c556fc6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.718859 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "1271b3e0-b6e9-45cf-a267-ab013c556fc6" (UID: "1271b3e0-b6e9-45cf-a267-ab013c556fc6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.719519 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "1271b3e0-b6e9-45cf-a267-ab013c556fc6" (UID: "1271b3e0-b6e9-45cf-a267-ab013c556fc6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.719648 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "1271b3e0-b6e9-45cf-a267-ab013c556fc6" (UID: "1271b3e0-b6e9-45cf-a267-ab013c556fc6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.720035 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "1271b3e0-b6e9-45cf-a267-ab013c556fc6" (UID: "1271b3e0-b6e9-45cf-a267-ab013c556fc6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.723104 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "1271b3e0-b6e9-45cf-a267-ab013c556fc6" (UID: "1271b3e0-b6e9-45cf-a267-ab013c556fc6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.723287 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "1271b3e0-b6e9-45cf-a267-ab013c556fc6" (UID: "1271b3e0-b6e9-45cf-a267-ab013c556fc6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.723872 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "1271b3e0-b6e9-45cf-a267-ab013c556fc6" (UID: "1271b3e0-b6e9-45cf-a267-ab013c556fc6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.725320 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "1271b3e0-b6e9-45cf-a267-ab013c556fc6" (UID: "1271b3e0-b6e9-45cf-a267-ab013c556fc6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.727063 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "1271b3e0-b6e9-45cf-a267-ab013c556fc6" (UID: "1271b3e0-b6e9-45cf-a267-ab013c556fc6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.727331 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1271b3e0-b6e9-45cf-a267-ab013c556fc6-kube-api-access-6wzsc" (OuterVolumeSpecName: "kube-api-access-6wzsc") pod "1271b3e0-b6e9-45cf-a267-ab013c556fc6" (UID: "1271b3e0-b6e9-45cf-a267-ab013c556fc6"). InnerVolumeSpecName "kube-api-access-6wzsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.759779 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-77df6bdc9c-n997h"] Oct 14 14:53:14 crc kubenswrapper[4860]: E1014 14:53:14.760152 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d599715-5ef5-4fca-adc9-9f1edad3be77" containerName="registry-server" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.760281 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d599715-5ef5-4fca-adc9-9f1edad3be77" containerName="registry-server" Oct 14 14:53:14 crc kubenswrapper[4860]: E1014 14:53:14.760371 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4ed01a-ec4e-41b6-90be-b246b51da828" containerName="registry-server" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.760434 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4ed01a-ec4e-41b6-90be-b246b51da828" containerName="registry-server" Oct 14 14:53:14 crc kubenswrapper[4860]: E1014 14:53:14.760490 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904d68d4-d22d-483b-9fac-9fb2db95898f" containerName="extract-utilities" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.760551 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="904d68d4-d22d-483b-9fac-9fb2db95898f" containerName="extract-utilities" Oct 14 14:53:14 crc kubenswrapper[4860]: E1014 14:53:14.760621 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a1d505b-82c9-4c08-9c49-95c9bcde1d03" containerName="pruner" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.760672 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a1d505b-82c9-4c08-9c49-95c9bcde1d03" containerName="pruner" Oct 14 14:53:14 crc kubenswrapper[4860]: E1014 14:53:14.760728 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d599715-5ef5-4fca-adc9-9f1edad3be77" containerName="extract-content" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.760797 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d599715-5ef5-4fca-adc9-9f1edad3be77" containerName="extract-content" Oct 14 14:53:14 crc kubenswrapper[4860]: E1014 14:53:14.760884 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904d68d4-d22d-483b-9fac-9fb2db95898f" containerName="registry-server" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.761010 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="904d68d4-d22d-483b-9fac-9fb2db95898f" containerName="registry-server" Oct 14 14:53:14 crc kubenswrapper[4860]: E1014 14:53:14.761173 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4ed01a-ec4e-41b6-90be-b246b51da828" containerName="extract-utilities" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.761297 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4ed01a-ec4e-41b6-90be-b246b51da828" containerName="extract-utilities" Oct 14 14:53:14 crc kubenswrapper[4860]: E1014 14:53:14.761510 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4ed01a-ec4e-41b6-90be-b246b51da828" containerName="extract-content" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.761579 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4ed01a-ec4e-41b6-90be-b246b51da828" containerName="extract-content" Oct 14 14:53:14 crc kubenswrapper[4860]: E1014 14:53:14.761666 4860 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e5fb0dec-1ed9-4a83-8a55-4f229f200cf8" containerName="extract-utilities" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.761814 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fb0dec-1ed9-4a83-8a55-4f229f200cf8" containerName="extract-utilities" Oct 14 14:53:14 crc kubenswrapper[4860]: E1014 14:53:14.761919 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904d68d4-d22d-483b-9fac-9fb2db95898f" containerName="extract-content" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.762017 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="904d68d4-d22d-483b-9fac-9fb2db95898f" containerName="extract-content" Oct 14 14:53:14 crc kubenswrapper[4860]: E1014 14:53:14.762179 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fb0dec-1ed9-4a83-8a55-4f229f200cf8" containerName="registry-server" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.762453 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fb0dec-1ed9-4a83-8a55-4f229f200cf8" containerName="registry-server" Oct 14 14:53:14 crc kubenswrapper[4860]: E1014 14:53:14.762722 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fb0dec-1ed9-4a83-8a55-4f229f200cf8" containerName="extract-content" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.762783 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fb0dec-1ed9-4a83-8a55-4f229f200cf8" containerName="extract-content" Oct 14 14:53:14 crc kubenswrapper[4860]: E1014 14:53:14.762840 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d599715-5ef5-4fca-adc9-9f1edad3be77" containerName="extract-utilities" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.762898 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d599715-5ef5-4fca-adc9-9f1edad3be77" containerName="extract-utilities" Oct 14 14:53:14 crc kubenswrapper[4860]: E1014 14:53:14.762954 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="552fe4d5-540a-4b65-9105-013cb46c4abc" containerName="pruner" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.763011 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="552fe4d5-540a-4b65-9105-013cb46c4abc" containerName="pruner" Oct 14 14:53:14 crc kubenswrapper[4860]: E1014 14:53:14.763101 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1271b3e0-b6e9-45cf-a267-ab013c556fc6" containerName="oauth-openshift" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.763156 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1271b3e0-b6e9-45cf-a267-ab013c556fc6" containerName="oauth-openshift" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.763326 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf4ed01a-ec4e-41b6-90be-b246b51da828" containerName="registry-server" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.763389 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1271b3e0-b6e9-45cf-a267-ab013c556fc6" containerName="oauth-openshift" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.763495 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a1d505b-82c9-4c08-9c49-95c9bcde1d03" containerName="pruner" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.763578 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="552fe4d5-540a-4b65-9105-013cb46c4abc" containerName="pruner" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.763636 4860 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="904d68d4-d22d-483b-9fac-9fb2db95898f" containerName="registry-server" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.763825 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d599715-5ef5-4fca-adc9-9f1edad3be77" containerName="registry-server" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.763928 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5fb0dec-1ed9-4a83-8a55-4f229f200cf8" containerName="registry-server" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.764367 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-77df6bdc9c-n997h"] Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.764491 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.816315 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xww28\" (UniqueName: \"kubernetes.io/projected/77cdf7e8-dacb-46fd-9393-25d5e36e079e-kube-api-access-xww28\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.816399 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.816426 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-service-ca\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.816563 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77cdf7e8-dacb-46fd-9393-25d5e36e079e-audit-policies\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.816601 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-user-template-error\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.816684 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-user-template-login\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " 
pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.816747 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.816779 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-router-certs\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.816810 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-session\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.816866 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.816935 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.816973 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.817004 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77cdf7e8-dacb-46fd-9393-25d5e36e079e-audit-dir\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.817047 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.817216 4860 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.817250 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.817262 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.817289 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wzsc\" (UniqueName: \"kubernetes.io/projected/1271b3e0-b6e9-45cf-a267-ab013c556fc6-kube-api-access-6wzsc\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.817300 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.817310 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.817319 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.817331 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.817340 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.817367 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.817377 4860 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1271b3e0-b6e9-45cf-a267-ab013c556fc6-audit-dir\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.817388 4860 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.817398 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1271b3e0-b6e9-45cf-a267-ab013c556fc6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.918368 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-session\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.918456 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.918488 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.919113 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.919145 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77cdf7e8-dacb-46fd-9393-25d5e36e079e-audit-dir\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.919167 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.919198 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xww28\" (UniqueName: \"kubernetes.io/projected/77cdf7e8-dacb-46fd-9393-25d5e36e079e-kube-api-access-xww28\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: 
\"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.919255 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.919280 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-service-ca\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.919278 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77cdf7e8-dacb-46fd-9393-25d5e36e079e-audit-dir\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.919337 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77cdf7e8-dacb-46fd-9393-25d5e36e079e-audit-policies\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.919361 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-user-template-error\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.919398 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-user-template-login\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.919430 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.919460 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-router-certs\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " 
pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.920280 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-service-ca\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.920838 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77cdf7e8-dacb-46fd-9393-25d5e36e079e-audit-policies\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.920839 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.919395 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.922328 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.922439 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-router-certs\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.923338 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-user-template-error\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.923342 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " 
pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.924460 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.924898 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-session\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.925434 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.928100 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/77cdf7e8-dacb-46fd-9393-25d5e36e079e-v4-0-config-user-template-login\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:14 crc kubenswrapper[4860]: I1014 14:53:14.940621 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xww28\" (UniqueName: \"kubernetes.io/projected/77cdf7e8-dacb-46fd-9393-25d5e36e079e-kube-api-access-xww28\") pod \"oauth-openshift-77df6bdc9c-n997h\" (UID: \"77cdf7e8-dacb-46fd-9393-25d5e36e079e\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:15 crc kubenswrapper[4860]: I1014 14:53:15.089841 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:15 crc kubenswrapper[4860]: I1014 14:53:15.461235 4860 generic.go:334] "Generic (PLEG): container finished" podID="1271b3e0-b6e9-45cf-a267-ab013c556fc6" containerID="7c55007a50846c0d7570879ff4288d976d3f2968987fa1a0c72f787738362381" exitCode=0 Oct 14 14:53:15 crc kubenswrapper[4860]: I1014 14:53:15.461284 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" event={"ID":"1271b3e0-b6e9-45cf-a267-ab013c556fc6","Type":"ContainerDied","Data":"7c55007a50846c0d7570879ff4288d976d3f2968987fa1a0c72f787738362381"} Oct 14 14:53:15 crc kubenswrapper[4860]: I1014 14:53:15.461306 4860 util.go:48] "No ready sandbox for pod can be found. 
Oct 14 14:53:15 crc kubenswrapper[4860]: I1014 14:53:15.461330 4860 scope.go:117] "RemoveContainer" containerID="7c55007a50846c0d7570879ff4288d976d3f2968987fa1a0c72f787738362381"
Oct 14 14:53:15 crc kubenswrapper[4860]: I1014 14:53:15.461315 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5xlzj" event={"ID":"1271b3e0-b6e9-45cf-a267-ab013c556fc6","Type":"ContainerDied","Data":"7e4c11a0af9dd66dd6f35a883116b88bca1c37bbe643bce9290fbe237cc80516"}
Oct 14 14:53:15 crc kubenswrapper[4860]: I1014 14:53:15.490894 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5xlzj"]
Oct 14 14:53:15 crc kubenswrapper[4860]: I1014 14:53:15.490941 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5xlzj"]
Oct 14 14:53:15 crc kubenswrapper[4860]: I1014 14:53:15.492621 4860 scope.go:117] "RemoveContainer" containerID="7c55007a50846c0d7570879ff4288d976d3f2968987fa1a0c72f787738362381"
Oct 14 14:53:15 crc kubenswrapper[4860]: E1014 14:53:15.493945 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c55007a50846c0d7570879ff4288d976d3f2968987fa1a0c72f787738362381\": container with ID starting with 7c55007a50846c0d7570879ff4288d976d3f2968987fa1a0c72f787738362381 not found: ID does not exist" containerID="7c55007a50846c0d7570879ff4288d976d3f2968987fa1a0c72f787738362381"
Oct 14 14:53:15 crc kubenswrapper[4860]: I1014 14:53:15.494003 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c55007a50846c0d7570879ff4288d976d3f2968987fa1a0c72f787738362381"} err="failed to get container status \"7c55007a50846c0d7570879ff4288d976d3f2968987fa1a0c72f787738362381\": rpc error: code = NotFound desc = could not find container \"7c55007a50846c0d7570879ff4288d976d3f2968987fa1a0c72f787738362381\": container with ID starting with 7c55007a50846c0d7570879ff4288d976d3f2968987fa1a0c72f787738362381 not found: ID does not exist"
Oct 14 14:53:15 crc kubenswrapper[4860]: I1014 14:53:15.519970 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-77df6bdc9c-n997h"]
Oct 14 14:53:15 crc kubenswrapper[4860]: W1014 14:53:15.520786 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77cdf7e8_dacb_46fd_9393_25d5e36e079e.slice/crio-d95ac1953ab3e4e11ea1a6bb36da963cb0d52d9c0068860527e1c687dabbe62c WatchSource:0}: Error finding container d95ac1953ab3e4e11ea1a6bb36da963cb0d52d9c0068860527e1c687dabbe62c: Status 404 returned error can't find the container with id d95ac1953ab3e4e11ea1a6bb36da963cb0d52d9c0068860527e1c687dabbe62c
Oct 14 14:53:16 crc kubenswrapper[4860]: I1014 14:53:16.470203 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" event={"ID":"77cdf7e8-dacb-46fd-9393-25d5e36e079e","Type":"ContainerStarted","Data":"854453b5999c82058a4159544526271e42b7b308e3808d8a5d01d189d24d3602"}
Oct 14 14:53:16 crc kubenswrapper[4860]: I1014 14:53:16.471821 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" event={"ID":"77cdf7e8-dacb-46fd-9393-25d5e36e079e","Type":"ContainerStarted","Data":"d95ac1953ab3e4e11ea1a6bb36da963cb0d52d9c0068860527e1c687dabbe62c"}
event={"ID":"77cdf7e8-dacb-46fd-9393-25d5e36e079e","Type":"ContainerStarted","Data":"d95ac1953ab3e4e11ea1a6bb36da963cb0d52d9c0068860527e1c687dabbe62c"} Oct 14 14:53:16 crc kubenswrapper[4860]: I1014 14:53:16.471957 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:16 crc kubenswrapper[4860]: I1014 14:53:16.480230 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" Oct 14 14:53:16 crc kubenswrapper[4860]: I1014 14:53:16.495772 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" podStartSLOduration=27.495751776 podStartE2EDuration="27.495751776s" podCreationTimestamp="2025-10-14 14:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:53:16.489474828 +0000 UTC m=+258.076258277" watchObservedRunningTime="2025-10-14 14:53:16.495751776 +0000 UTC m=+258.082535225" Oct 14 14:53:17 crc kubenswrapper[4860]: I1014 14:53:17.067980 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1271b3e0-b6e9-45cf-a267-ab013c556fc6" path="/var/lib/kubelet/pods/1271b3e0-b6e9-45cf-a267-ab013c556fc6/volumes" Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.409089 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xb28n"] Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.410286 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xb28n" podUID="7872d916-7101-4078-a051-702427c0321f" containerName="registry-server" containerID="cri-o://b4fc99a1cc9d8bb553a163a25266d8de9373ad4f315f164af6346b622ad44d36" gracePeriod=30 Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.419231 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gd4lz"] Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.419740 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gd4lz" podUID="cf902d5c-75ec-4993-8a0f-2a188b2826e3" containerName="registry-server" containerID="cri-o://a118c560fb29fae5482cd392af1250900626746406d8eddc50df9ec8347b1214" gracePeriod=30 Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.437523 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fml8s"] Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.437734 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" podUID="91f0ff50-8025-417f-8349-bb7b79b04441" containerName="marketplace-operator" containerID="cri-o://5d955e0679096f2a789aafc75e490dcad3fbb98d8369dca8e2ce277b051cee20" gracePeriod=30 Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.445216 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7rbf"] Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.445435 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m7rbf" podUID="8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" containerName="registry-server" 
containerID="cri-o://3e3d95020b72a1d5b660d6435ece25d30184b3b1ab3c783d09d92b303092e317" gracePeriod=30 Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.454171 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ll2pt"] Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.454422 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ll2pt" podUID="221c2ea0-2c26-436a-a2cd-f77091de581f" containerName="registry-server" containerID="cri-o://f7e0b75cde6b3a26d22e3fbc190e42569bb7976f0061f9e5b256e2fe7083e3ae" gracePeriod=30 Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.470367 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4tjxk"] Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.470949 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4tjxk" Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.481295 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4tjxk"] Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.563095 4860 generic.go:334] "Generic (PLEG): container finished" podID="91f0ff50-8025-417f-8349-bb7b79b04441" containerID="5d955e0679096f2a789aafc75e490dcad3fbb98d8369dca8e2ce277b051cee20" exitCode=0 Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.563193 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" event={"ID":"91f0ff50-8025-417f-8349-bb7b79b04441","Type":"ContainerDied","Data":"5d955e0679096f2a789aafc75e490dcad3fbb98d8369dca8e2ce277b051cee20"} Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.568762 4860 generic.go:334] "Generic (PLEG): container finished" podID="cf902d5c-75ec-4993-8a0f-2a188b2826e3" containerID="a118c560fb29fae5482cd392af1250900626746406d8eddc50df9ec8347b1214" exitCode=0 Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.568828 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gd4lz" event={"ID":"cf902d5c-75ec-4993-8a0f-2a188b2826e3","Type":"ContainerDied","Data":"a118c560fb29fae5482cd392af1250900626746406d8eddc50df9ec8347b1214"} Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.571915 4860 generic.go:334] "Generic (PLEG): container finished" podID="7872d916-7101-4078-a051-702427c0321f" containerID="b4fc99a1cc9d8bb553a163a25266d8de9373ad4f315f164af6346b622ad44d36" exitCode=0 Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.571947 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xb28n" event={"ID":"7872d916-7101-4078-a051-702427c0321f","Type":"ContainerDied","Data":"b4fc99a1cc9d8bb553a163a25266d8de9373ad4f315f164af6346b622ad44d36"} Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.671590 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jtzg\" (UniqueName: \"kubernetes.io/projected/4e88f73d-d331-4edf-903f-2930d09f8fd9-kube-api-access-8jtzg\") pod \"marketplace-operator-79b997595-4tjxk\" (UID: \"4e88f73d-d331-4edf-903f-2930d09f8fd9\") " pod="openshift-marketplace/marketplace-operator-79b997595-4tjxk" Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.671657 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e88f73d-d331-4edf-903f-2930d09f8fd9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4tjxk\" (UID: \"4e88f73d-d331-4edf-903f-2930d09f8fd9\") " pod="openshift-marketplace/marketplace-operator-79b997595-4tjxk" Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.671696 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4e88f73d-d331-4edf-903f-2930d09f8fd9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4tjxk\" (UID: \"4e88f73d-d331-4edf-903f-2930d09f8fd9\") " pod="openshift-marketplace/marketplace-operator-79b997595-4tjxk" Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.778795 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jtzg\" (UniqueName: \"kubernetes.io/projected/4e88f73d-d331-4edf-903f-2930d09f8fd9-kube-api-access-8jtzg\") pod \"marketplace-operator-79b997595-4tjxk\" (UID: \"4e88f73d-d331-4edf-903f-2930d09f8fd9\") " pod="openshift-marketplace/marketplace-operator-79b997595-4tjxk" Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.778852 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e88f73d-d331-4edf-903f-2930d09f8fd9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4tjxk\" (UID: \"4e88f73d-d331-4edf-903f-2930d09f8fd9\") " pod="openshift-marketplace/marketplace-operator-79b997595-4tjxk" Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.778886 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4e88f73d-d331-4edf-903f-2930d09f8fd9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4tjxk\" (UID: \"4e88f73d-d331-4edf-903f-2930d09f8fd9\") " pod="openshift-marketplace/marketplace-operator-79b997595-4tjxk" Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.781627 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e88f73d-d331-4edf-903f-2930d09f8fd9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4tjxk\" (UID: \"4e88f73d-d331-4edf-903f-2930d09f8fd9\") " pod="openshift-marketplace/marketplace-operator-79b997595-4tjxk" Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.795353 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4e88f73d-d331-4edf-903f-2930d09f8fd9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4tjxk\" (UID: \"4e88f73d-d331-4edf-903f-2930d09f8fd9\") " pod="openshift-marketplace/marketplace-operator-79b997595-4tjxk" Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.817812 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jtzg\" (UniqueName: \"kubernetes.io/projected/4e88f73d-d331-4edf-903f-2930d09f8fd9-kube-api-access-8jtzg\") pod \"marketplace-operator-79b997595-4tjxk\" (UID: \"4e88f73d-d331-4edf-903f-2930d09f8fd9\") " pod="openshift-marketplace/marketplace-operator-79b997595-4tjxk" Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.856505 4860 util.go:48] "No ready sandbox for pod can be found. 
Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.880230 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf902d5c-75ec-4993-8a0f-2a188b2826e3-utilities\") pod \"cf902d5c-75ec-4993-8a0f-2a188b2826e3\" (UID: \"cf902d5c-75ec-4993-8a0f-2a188b2826e3\") "
Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.880326 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf902d5c-75ec-4993-8a0f-2a188b2826e3-catalog-content\") pod \"cf902d5c-75ec-4993-8a0f-2a188b2826e3\" (UID: \"cf902d5c-75ec-4993-8a0f-2a188b2826e3\") "
Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.880396 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f968n\" (UniqueName: \"kubernetes.io/projected/cf902d5c-75ec-4993-8a0f-2a188b2826e3-kube-api-access-f968n\") pod \"cf902d5c-75ec-4993-8a0f-2a188b2826e3\" (UID: \"cf902d5c-75ec-4993-8a0f-2a188b2826e3\") "
Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.881419 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf902d5c-75ec-4993-8a0f-2a188b2826e3-utilities" (OuterVolumeSpecName: "utilities") pod "cf902d5c-75ec-4993-8a0f-2a188b2826e3" (UID: "cf902d5c-75ec-4993-8a0f-2a188b2826e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.883893 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf902d5c-75ec-4993-8a0f-2a188b2826e3-kube-api-access-f968n" (OuterVolumeSpecName: "kube-api-access-f968n") pod "cf902d5c-75ec-4993-8a0f-2a188b2826e3" (UID: "cf902d5c-75ec-4993-8a0f-2a188b2826e3"). InnerVolumeSpecName "kube-api-access-f968n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.978322 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf902d5c-75ec-4993-8a0f-2a188b2826e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf902d5c-75ec-4993-8a0f-2a188b2826e3" (UID: "cf902d5c-75ec-4993-8a0f-2a188b2826e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.981731 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf902d5c-75ec-4993-8a0f-2a188b2826e3-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.981767 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf902d5c-75ec-4993-8a0f-2a188b2826e3-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 14:53:35 crc kubenswrapper[4860]: I1014 14:53:35.981780 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f968n\" (UniqueName: \"kubernetes.io/projected/cf902d5c-75ec-4993-8a0f-2a188b2826e3-kube-api-access-f968n\") on node \"crc\" DevicePath \"\""
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.022185 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ll2pt"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.028503 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fml8s"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.028726 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4tjxk"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.039287 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7rbf"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.083097 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xb28n"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.084497 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfv9n\" (UniqueName: \"kubernetes.io/projected/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-kube-api-access-sfv9n\") pod \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\" (UID: \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\") "
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.084552 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxkmm\" (UniqueName: \"kubernetes.io/projected/221c2ea0-2c26-436a-a2cd-f77091de581f-kube-api-access-sxkmm\") pod \"221c2ea0-2c26-436a-a2cd-f77091de581f\" (UID: \"221c2ea0-2c26-436a-a2cd-f77091de581f\") "
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.084584 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/221c2ea0-2c26-436a-a2cd-f77091de581f-utilities\") pod \"221c2ea0-2c26-436a-a2cd-f77091de581f\" (UID: \"221c2ea0-2c26-436a-a2cd-f77091de581f\") "
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.084603 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91f0ff50-8025-417f-8349-bb7b79b04441-marketplace-trusted-ca\") pod \"91f0ff50-8025-417f-8349-bb7b79b04441\" (UID: \"91f0ff50-8025-417f-8349-bb7b79b04441\") "
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.084627 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-catalog-content\") pod \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\" (UID: \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\") "
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.084677 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91f0ff50-8025-417f-8349-bb7b79b04441-marketplace-operator-metrics\") pod \"91f0ff50-8025-417f-8349-bb7b79b04441\" (UID: \"91f0ff50-8025-417f-8349-bb7b79b04441\") "
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.084739 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/221c2ea0-2c26-436a-a2cd-f77091de581f-catalog-content\") pod \"221c2ea0-2c26-436a-a2cd-f77091de581f\" (UID: \"221c2ea0-2c26-436a-a2cd-f77091de581f\") "
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.084756 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-utilities\") pod \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\" (UID: \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\") "
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-utilities\") pod \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\" (UID: \"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a\") " Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.084777 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bjfq\" (UniqueName: \"kubernetes.io/projected/91f0ff50-8025-417f-8349-bb7b79b04441-kube-api-access-7bjfq\") pod \"91f0ff50-8025-417f-8349-bb7b79b04441\" (UID: \"91f0ff50-8025-417f-8349-bb7b79b04441\") " Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.087847 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-kube-api-access-sfv9n" (OuterVolumeSpecName: "kube-api-access-sfv9n") pod "8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" (UID: "8da0ebd6-f5ac-4668-ada7-f71605ae4c4a"). InnerVolumeSpecName "kube-api-access-sfv9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.089464 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f0ff50-8025-417f-8349-bb7b79b04441-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "91f0ff50-8025-417f-8349-bb7b79b04441" (UID: "91f0ff50-8025-417f-8349-bb7b79b04441"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.090559 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/221c2ea0-2c26-436a-a2cd-f77091de581f-utilities" (OuterVolumeSpecName: "utilities") pod "221c2ea0-2c26-436a-a2cd-f77091de581f" (UID: "221c2ea0-2c26-436a-a2cd-f77091de581f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.092791 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-utilities" (OuterVolumeSpecName: "utilities") pod "8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" (UID: "8da0ebd6-f5ac-4668-ada7-f71605ae4c4a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.101349 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f0ff50-8025-417f-8349-bb7b79b04441-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "91f0ff50-8025-417f-8349-bb7b79b04441" (UID: "91f0ff50-8025-417f-8349-bb7b79b04441"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.101463 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/221c2ea0-2c26-436a-a2cd-f77091de581f-kube-api-access-sxkmm" (OuterVolumeSpecName: "kube-api-access-sxkmm") pod "221c2ea0-2c26-436a-a2cd-f77091de581f" (UID: "221c2ea0-2c26-436a-a2cd-f77091de581f"). InnerVolumeSpecName "kube-api-access-sxkmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.108003 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91f0ff50-8025-417f-8349-bb7b79b04441-kube-api-access-7bjfq" (OuterVolumeSpecName: "kube-api-access-7bjfq") pod "91f0ff50-8025-417f-8349-bb7b79b04441" (UID: "91f0ff50-8025-417f-8349-bb7b79b04441"). InnerVolumeSpecName "kube-api-access-7bjfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.144588 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" (UID: "8da0ebd6-f5ac-4668-ada7-f71605ae4c4a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.186307 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvcbh\" (UniqueName: \"kubernetes.io/projected/7872d916-7101-4078-a051-702427c0321f-kube-api-access-bvcbh\") pod \"7872d916-7101-4078-a051-702427c0321f\" (UID: \"7872d916-7101-4078-a051-702427c0321f\") " Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.186480 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7872d916-7101-4078-a051-702427c0321f-catalog-content\") pod \"7872d916-7101-4078-a051-702427c0321f\" (UID: \"7872d916-7101-4078-a051-702427c0321f\") " Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.186562 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7872d916-7101-4078-a051-702427c0321f-utilities\") pod \"7872d916-7101-4078-a051-702427c0321f\" (UID: \"7872d916-7101-4078-a051-702427c0321f\") " Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.186796 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.186809 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bjfq\" (UniqueName: \"kubernetes.io/projected/91f0ff50-8025-417f-8349-bb7b79b04441-kube-api-access-7bjfq\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.186819 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfv9n\" (UniqueName: \"kubernetes.io/projected/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-kube-api-access-sfv9n\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.186827 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxkmm\" (UniqueName: \"kubernetes.io/projected/221c2ea0-2c26-436a-a2cd-f77091de581f-kube-api-access-sxkmm\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.186836 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/221c2ea0-2c26-436a-a2cd-f77091de581f-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.186862 4860 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/91f0ff50-8025-417f-8349-bb7b79b04441-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.186873 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.186883 4860 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91f0ff50-8025-417f-8349-bb7b79b04441-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.187564 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7872d916-7101-4078-a051-702427c0321f-utilities" (OuterVolumeSpecName: "utilities") pod "7872d916-7101-4078-a051-702427c0321f" (UID: "7872d916-7101-4078-a051-702427c0321f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.189753 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7872d916-7101-4078-a051-702427c0321f-kube-api-access-bvcbh" (OuterVolumeSpecName: "kube-api-access-bvcbh") pod "7872d916-7101-4078-a051-702427c0321f" (UID: "7872d916-7101-4078-a051-702427c0321f"). InnerVolumeSpecName "kube-api-access-bvcbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.240801 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7872d916-7101-4078-a051-702427c0321f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7872d916-7101-4078-a051-702427c0321f" (UID: "7872d916-7101-4078-a051-702427c0321f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.244469 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/221c2ea0-2c26-436a-a2cd-f77091de581f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "221c2ea0-2c26-436a-a2cd-f77091de581f" (UID: "221c2ea0-2c26-436a-a2cd-f77091de581f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.288382 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvcbh\" (UniqueName: \"kubernetes.io/projected/7872d916-7101-4078-a051-702427c0321f-kube-api-access-bvcbh\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.288409 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7872d916-7101-4078-a051-702427c0321f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.288418 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/221c2ea0-2c26-436a-a2cd-f77091de581f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.288426 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7872d916-7101-4078-a051-702427c0321f-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.517686 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4tjxk"] Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.580526 4860 generic.go:334] "Generic (PLEG): container finished" podID="221c2ea0-2c26-436a-a2cd-f77091de581f" containerID="f7e0b75cde6b3a26d22e3fbc190e42569bb7976f0061f9e5b256e2fe7083e3ae" exitCode=0 Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.580597 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ll2pt" event={"ID":"221c2ea0-2c26-436a-a2cd-f77091de581f","Type":"ContainerDied","Data":"f7e0b75cde6b3a26d22e3fbc190e42569bb7976f0061f9e5b256e2fe7083e3ae"} Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.580600 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ll2pt" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.580625 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ll2pt" event={"ID":"221c2ea0-2c26-436a-a2cd-f77091de581f","Type":"ContainerDied","Data":"6cb47c50cc1918172218d9fd24ec33c82cfe405819b1c88e5013bc353670808d"} Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.580644 4860 scope.go:117] "RemoveContainer" containerID="f7e0b75cde6b3a26d22e3fbc190e42569bb7976f0061f9e5b256e2fe7083e3ae" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.584900 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gd4lz" event={"ID":"cf902d5c-75ec-4993-8a0f-2a188b2826e3","Type":"ContainerDied","Data":"2fb581a7946a97829584f6295bc5aea1fe2a5a51892b8d1781226c377dba2a1a"} Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.584986 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gd4lz" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.596258 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xb28n" event={"ID":"7872d916-7101-4078-a051-702427c0321f","Type":"ContainerDied","Data":"9dd9b2014eebf930ba3a86e71b918dfb7ce3fc94f1238730f2c3d20d5f8a09ef"} Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.596277 4860 util.go:48] "No ready sandbox for pod can be found. 
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.613262 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fml8s" event={"ID":"91f0ff50-8025-417f-8349-bb7b79b04441","Type":"ContainerDied","Data":"ea13c0b7b22c724c00adab3ab431c8f8fbdf9fee706dd30372e3c288f72e7390"}
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.613451 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fml8s"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.635784 4860 scope.go:117] "RemoveContainer" containerID="605dad3235c4f11b6ad5b9daf77469a709f21f75c780398b127138eeea4f630a"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.635977 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7rbf"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.635864 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7rbf" event={"ID":"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a","Type":"ContainerDied","Data":"3e3d95020b72a1d5b660d6435ece25d30184b3b1ab3c783d09d92b303092e317"}
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.635873 4860 generic.go:334] "Generic (PLEG): container finished" podID="8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" containerID="3e3d95020b72a1d5b660d6435ece25d30184b3b1ab3c783d09d92b303092e317" exitCode=0
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.636665 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7rbf" event={"ID":"8da0ebd6-f5ac-4668-ada7-f71605ae4c4a","Type":"ContainerDied","Data":"547f7b78e1b4a48615c61262110b778d70b0dfd0c171d0635362cba031ecd752"}
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.642757 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4tjxk" event={"ID":"4e88f73d-d331-4edf-903f-2930d09f8fd9","Type":"ContainerStarted","Data":"d8d74ae5f0e690772621e22e1e71e11a791c9c2f1f0857ac390ca92d28fcf1c9"}
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.676272 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xb28n"]
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.677406 4860 scope.go:117] "RemoveContainer" containerID="0bf9200a3b2538c3577e0a46e8a0589ca49cece230b1c9674730c6f6759517e0"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.678926 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xb28n"]
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.698714 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ll2pt"]
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.705430 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ll2pt"]
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.719943 4860 scope.go:117] "RemoveContainer" containerID="f7e0b75cde6b3a26d22e3fbc190e42569bb7976f0061f9e5b256e2fe7083e3ae"
Oct 14 14:53:36 crc kubenswrapper[4860]: E1014 14:53:36.720953 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e0b75cde6b3a26d22e3fbc190e42569bb7976f0061f9e5b256e2fe7083e3ae\": container with ID starting with f7e0b75cde6b3a26d22e3fbc190e42569bb7976f0061f9e5b256e2fe7083e3ae not found: ID does not exist" containerID="f7e0b75cde6b3a26d22e3fbc190e42569bb7976f0061f9e5b256e2fe7083e3ae"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.721089 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e0b75cde6b3a26d22e3fbc190e42569bb7976f0061f9e5b256e2fe7083e3ae"} err="failed to get container status \"f7e0b75cde6b3a26d22e3fbc190e42569bb7976f0061f9e5b256e2fe7083e3ae\": rpc error: code = NotFound desc = could not find container \"f7e0b75cde6b3a26d22e3fbc190e42569bb7976f0061f9e5b256e2fe7083e3ae\": container with ID starting with f7e0b75cde6b3a26d22e3fbc190e42569bb7976f0061f9e5b256e2fe7083e3ae not found: ID does not exist"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.721185 4860 scope.go:117] "RemoveContainer" containerID="605dad3235c4f11b6ad5b9daf77469a709f21f75c780398b127138eeea4f630a"
Oct 14 14:53:36 crc kubenswrapper[4860]: E1014 14:53:36.721933 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"605dad3235c4f11b6ad5b9daf77469a709f21f75c780398b127138eeea4f630a\": container with ID starting with 605dad3235c4f11b6ad5b9daf77469a709f21f75c780398b127138eeea4f630a not found: ID does not exist" containerID="605dad3235c4f11b6ad5b9daf77469a709f21f75c780398b127138eeea4f630a"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.722092 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"605dad3235c4f11b6ad5b9daf77469a709f21f75c780398b127138eeea4f630a"} err="failed to get container status \"605dad3235c4f11b6ad5b9daf77469a709f21f75c780398b127138eeea4f630a\": rpc error: code = NotFound desc = could not find container \"605dad3235c4f11b6ad5b9daf77469a709f21f75c780398b127138eeea4f630a\": container with ID starting with 605dad3235c4f11b6ad5b9daf77469a709f21f75c780398b127138eeea4f630a not found: ID does not exist"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.722165 4860 scope.go:117] "RemoveContainer" containerID="0bf9200a3b2538c3577e0a46e8a0589ca49cece230b1c9674730c6f6759517e0"
Oct 14 14:53:36 crc kubenswrapper[4860]: E1014 14:53:36.723482 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bf9200a3b2538c3577e0a46e8a0589ca49cece230b1c9674730c6f6759517e0\": container with ID starting with 0bf9200a3b2538c3577e0a46e8a0589ca49cece230b1c9674730c6f6759517e0 not found: ID does not exist" containerID="0bf9200a3b2538c3577e0a46e8a0589ca49cece230b1c9674730c6f6759517e0"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.723525 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bf9200a3b2538c3577e0a46e8a0589ca49cece230b1c9674730c6f6759517e0"} err="failed to get container status \"0bf9200a3b2538c3577e0a46e8a0589ca49cece230b1c9674730c6f6759517e0\": rpc error: code = NotFound desc = could not find container \"0bf9200a3b2538c3577e0a46e8a0589ca49cece230b1c9674730c6f6759517e0\": container with ID starting with 0bf9200a3b2538c3577e0a46e8a0589ca49cece230b1c9674730c6f6759517e0 not found: ID does not exist"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.723550 4860 scope.go:117] "RemoveContainer" containerID="a118c560fb29fae5482cd392af1250900626746406d8eddc50df9ec8347b1214"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.723652 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gd4lz"]
pods=["openshift-marketplace/community-operators-gd4lz"] Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.727545 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gd4lz"] Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.730005 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fml8s"] Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.732675 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fml8s"] Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.746417 4860 scope.go:117] "RemoveContainer" containerID="7b6d4d3af929866caa6bae2db2afb9cb920d3570cc931dcf188c85ba665bd400" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.748995 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7rbf"] Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.751528 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7rbf"] Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.759723 4860 scope.go:117] "RemoveContainer" containerID="7e87efdb49d7b69fd2cfd64c491de3d2947beee774bd06e6fa562dd1faac7860" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.772725 4860 scope.go:117] "RemoveContainer" containerID="b4fc99a1cc9d8bb553a163a25266d8de9373ad4f315f164af6346b622ad44d36" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.786873 4860 scope.go:117] "RemoveContainer" containerID="0fca09d86c13eb607eef613355ae2ae8f43ca47550c8e067ce516200e8331600" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.808741 4860 scope.go:117] "RemoveContainer" containerID="c1656a4995c8107d4bfccd69a75d744bd3840c514fd385c414c77f0e71c3a9f1" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.830101 4860 scope.go:117] "RemoveContainer" containerID="5d955e0679096f2a789aafc75e490dcad3fbb98d8369dca8e2ce277b051cee20" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.859231 4860 scope.go:117] "RemoveContainer" containerID="3e3d95020b72a1d5b660d6435ece25d30184b3b1ab3c783d09d92b303092e317" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.871132 4860 scope.go:117] "RemoveContainer" containerID="b20890da7851bb722c15725cb6c0acbd2374247782c9ff0ed019338c992bb98c" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.884898 4860 scope.go:117] "RemoveContainer" containerID="bdbf9d98bfab643afa51d94edf28e91fb1f3bd7ef681121ade57975622cc0e70" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.897084 4860 scope.go:117] "RemoveContainer" containerID="3e3d95020b72a1d5b660d6435ece25d30184b3b1ab3c783d09d92b303092e317" Oct 14 14:53:36 crc kubenswrapper[4860]: E1014 14:53:36.898777 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e3d95020b72a1d5b660d6435ece25d30184b3b1ab3c783d09d92b303092e317\": container with ID starting with 3e3d95020b72a1d5b660d6435ece25d30184b3b1ab3c783d09d92b303092e317 not found: ID does not exist" containerID="3e3d95020b72a1d5b660d6435ece25d30184b3b1ab3c783d09d92b303092e317" Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.898832 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e3d95020b72a1d5b660d6435ece25d30184b3b1ab3c783d09d92b303092e317"} err="failed to get container status \"3e3d95020b72a1d5b660d6435ece25d30184b3b1ab3c783d09d92b303092e317\": rpc error: code = 
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.898863 4860 scope.go:117] "RemoveContainer" containerID="b20890da7851bb722c15725cb6c0acbd2374247782c9ff0ed019338c992bb98c"
Oct 14 14:53:36 crc kubenswrapper[4860]: E1014 14:53:36.899402 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b20890da7851bb722c15725cb6c0acbd2374247782c9ff0ed019338c992bb98c\": container with ID starting with b20890da7851bb722c15725cb6c0acbd2374247782c9ff0ed019338c992bb98c not found: ID does not exist" containerID="b20890da7851bb722c15725cb6c0acbd2374247782c9ff0ed019338c992bb98c"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.899430 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b20890da7851bb722c15725cb6c0acbd2374247782c9ff0ed019338c992bb98c"} err="failed to get container status \"b20890da7851bb722c15725cb6c0acbd2374247782c9ff0ed019338c992bb98c\": rpc error: code = NotFound desc = could not find container \"b20890da7851bb722c15725cb6c0acbd2374247782c9ff0ed019338c992bb98c\": container with ID starting with b20890da7851bb722c15725cb6c0acbd2374247782c9ff0ed019338c992bb98c not found: ID does not exist"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.900019 4860 scope.go:117] "RemoveContainer" containerID="bdbf9d98bfab643afa51d94edf28e91fb1f3bd7ef681121ade57975622cc0e70"
Oct 14 14:53:36 crc kubenswrapper[4860]: E1014 14:53:36.900627 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdbf9d98bfab643afa51d94edf28e91fb1f3bd7ef681121ade57975622cc0e70\": container with ID starting with bdbf9d98bfab643afa51d94edf28e91fb1f3bd7ef681121ade57975622cc0e70 not found: ID does not exist" containerID="bdbf9d98bfab643afa51d94edf28e91fb1f3bd7ef681121ade57975622cc0e70"
Oct 14 14:53:36 crc kubenswrapper[4860]: I1014 14:53:36.900659 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdbf9d98bfab643afa51d94edf28e91fb1f3bd7ef681121ade57975622cc0e70"} err="failed to get container status \"bdbf9d98bfab643afa51d94edf28e91fb1f3bd7ef681121ade57975622cc0e70\": rpc error: code = NotFound desc = could not find container \"bdbf9d98bfab643afa51d94edf28e91fb1f3bd7ef681121ade57975622cc0e70\": container with ID starting with bdbf9d98bfab643afa51d94edf28e91fb1f3bd7ef681121ade57975622cc0e70 not found: ID does not exist"
Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.070289 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="221c2ea0-2c26-436a-a2cd-f77091de581f" path="/var/lib/kubelet/pods/221c2ea0-2c26-436a-a2cd-f77091de581f/volumes"
Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.071023 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7872d916-7101-4078-a051-702427c0321f" path="/var/lib/kubelet/pods/7872d916-7101-4078-a051-702427c0321f/volumes"
Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.071701 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" path="/var/lib/kubelet/pods/8da0ebd6-f5ac-4668-ada7-f71605ae4c4a/volumes"
Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.073059 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91f0ff50-8025-417f-8349-bb7b79b04441" path="/var/lib/kubelet/pods/91f0ff50-8025-417f-8349-bb7b79b04441/volumes"
"Cleaned up orphaned pod volumes dir" podUID="91f0ff50-8025-417f-8349-bb7b79b04441" path="/var/lib/kubelet/pods/91f0ff50-8025-417f-8349-bb7b79b04441/volumes" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.073625 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf902d5c-75ec-4993-8a0f-2a188b2826e3" path="/var/lib/kubelet/pods/cf902d5c-75ec-4993-8a0f-2a188b2826e3/volumes" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.623906 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pkgfs"] Oct 14 14:53:37 crc kubenswrapper[4860]: E1014 14:53:37.624677 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f0ff50-8025-417f-8349-bb7b79b04441" containerName="marketplace-operator" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.624694 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f0ff50-8025-417f-8349-bb7b79b04441" containerName="marketplace-operator" Oct 14 14:53:37 crc kubenswrapper[4860]: E1014 14:53:37.624734 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf902d5c-75ec-4993-8a0f-2a188b2826e3" containerName="extract-content" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.624741 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf902d5c-75ec-4993-8a0f-2a188b2826e3" containerName="extract-content" Oct 14 14:53:37 crc kubenswrapper[4860]: E1014 14:53:37.624750 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7872d916-7101-4078-a051-702427c0321f" containerName="extract-content" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.624756 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7872d916-7101-4078-a051-702427c0321f" containerName="extract-content" Oct 14 14:53:37 crc kubenswrapper[4860]: E1014 14:53:37.624767 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" containerName="registry-server" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.624772 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" containerName="registry-server" Oct 14 14:53:37 crc kubenswrapper[4860]: E1014 14:53:37.624781 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf902d5c-75ec-4993-8a0f-2a188b2826e3" containerName="extract-utilities" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.624864 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf902d5c-75ec-4993-8a0f-2a188b2826e3" containerName="extract-utilities" Oct 14 14:53:37 crc kubenswrapper[4860]: E1014 14:53:37.624893 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" containerName="extract-content" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.624903 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" containerName="extract-content" Oct 14 14:53:37 crc kubenswrapper[4860]: E1014 14:53:37.624914 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" containerName="extract-utilities" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.624953 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" containerName="extract-utilities" Oct 14 14:53:37 crc kubenswrapper[4860]: E1014 14:53:37.624967 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221c2ea0-2c26-436a-a2cd-f77091de581f" 
containerName="registry-server" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.624974 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="221c2ea0-2c26-436a-a2cd-f77091de581f" containerName="registry-server" Oct 14 14:53:37 crc kubenswrapper[4860]: E1014 14:53:37.624989 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221c2ea0-2c26-436a-a2cd-f77091de581f" containerName="extract-content" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.624996 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="221c2ea0-2c26-436a-a2cd-f77091de581f" containerName="extract-content" Oct 14 14:53:37 crc kubenswrapper[4860]: E1014 14:53:37.625005 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7872d916-7101-4078-a051-702427c0321f" containerName="extract-utilities" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.625040 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7872d916-7101-4078-a051-702427c0321f" containerName="extract-utilities" Oct 14 14:53:37 crc kubenswrapper[4860]: E1014 14:53:37.625051 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221c2ea0-2c26-436a-a2cd-f77091de581f" containerName="extract-utilities" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.625057 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="221c2ea0-2c26-436a-a2cd-f77091de581f" containerName="extract-utilities" Oct 14 14:53:37 crc kubenswrapper[4860]: E1014 14:53:37.625065 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7872d916-7101-4078-a051-702427c0321f" containerName="registry-server" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.625070 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7872d916-7101-4078-a051-702427c0321f" containerName="registry-server" Oct 14 14:53:37 crc kubenswrapper[4860]: E1014 14:53:37.625078 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf902d5c-75ec-4993-8a0f-2a188b2826e3" containerName="registry-server" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.625083 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf902d5c-75ec-4993-8a0f-2a188b2826e3" containerName="registry-server" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.625171 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7872d916-7101-4078-a051-702427c0321f" containerName="registry-server" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.625179 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf902d5c-75ec-4993-8a0f-2a188b2826e3" containerName="registry-server" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.625186 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="91f0ff50-8025-417f-8349-bb7b79b04441" containerName="marketplace-operator" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.625196 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da0ebd6-f5ac-4668-ada7-f71605ae4c4a" containerName="registry-server" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.625205 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="221c2ea0-2c26-436a-a2cd-f77091de581f" containerName="registry-server" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.627040 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pkgfs" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.628631 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.636933 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkgfs"] Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.661696 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4tjxk" event={"ID":"4e88f73d-d331-4edf-903f-2930d09f8fd9","Type":"ContainerStarted","Data":"3f02cbac46c6fbfef1738be06c703e97998277ef0822c5c74409bff4237ee9d8"} Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.661957 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4tjxk" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.665206 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4tjxk" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.682405 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4tjxk" podStartSLOduration=2.682383469 podStartE2EDuration="2.682383469s" podCreationTimestamp="2025-10-14 14:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:53:37.674666926 +0000 UTC m=+279.261450385" watchObservedRunningTime="2025-10-14 14:53:37.682383469 +0000 UTC m=+279.269166918" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.714406 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b57e8b6-5f6f-42fb-a3c2-53567553c663-utilities\") pod \"redhat-marketplace-pkgfs\" (UID: \"8b57e8b6-5f6f-42fb-a3c2-53567553c663\") " pod="openshift-marketplace/redhat-marketplace-pkgfs" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.714475 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b57e8b6-5f6f-42fb-a3c2-53567553c663-catalog-content\") pod \"redhat-marketplace-pkgfs\" (UID: \"8b57e8b6-5f6f-42fb-a3c2-53567553c663\") " pod="openshift-marketplace/redhat-marketplace-pkgfs" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.714630 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2xnj\" (UniqueName: \"kubernetes.io/projected/8b57e8b6-5f6f-42fb-a3c2-53567553c663-kube-api-access-p2xnj\") pod \"redhat-marketplace-pkgfs\" (UID: \"8b57e8b6-5f6f-42fb-a3c2-53567553c663\") " pod="openshift-marketplace/redhat-marketplace-pkgfs" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.815465 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b57e8b6-5f6f-42fb-a3c2-53567553c663-utilities\") pod \"redhat-marketplace-pkgfs\" (UID: \"8b57e8b6-5f6f-42fb-a3c2-53567553c663\") " pod="openshift-marketplace/redhat-marketplace-pkgfs" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.815515 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/8b57e8b6-5f6f-42fb-a3c2-53567553c663-catalog-content\") pod \"redhat-marketplace-pkgfs\" (UID: \"8b57e8b6-5f6f-42fb-a3c2-53567553c663\") " pod="openshift-marketplace/redhat-marketplace-pkgfs" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.815545 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2xnj\" (UniqueName: \"kubernetes.io/projected/8b57e8b6-5f6f-42fb-a3c2-53567553c663-kube-api-access-p2xnj\") pod \"redhat-marketplace-pkgfs\" (UID: \"8b57e8b6-5f6f-42fb-a3c2-53567553c663\") " pod="openshift-marketplace/redhat-marketplace-pkgfs" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.816051 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b57e8b6-5f6f-42fb-a3c2-53567553c663-utilities\") pod \"redhat-marketplace-pkgfs\" (UID: \"8b57e8b6-5f6f-42fb-a3c2-53567553c663\") " pod="openshift-marketplace/redhat-marketplace-pkgfs" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.816100 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b57e8b6-5f6f-42fb-a3c2-53567553c663-catalog-content\") pod \"redhat-marketplace-pkgfs\" (UID: \"8b57e8b6-5f6f-42fb-a3c2-53567553c663\") " pod="openshift-marketplace/redhat-marketplace-pkgfs" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.820860 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lsrz4"] Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.821772 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lsrz4" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.824106 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.831565 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lsrz4"] Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.840961 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2xnj\" (UniqueName: \"kubernetes.io/projected/8b57e8b6-5f6f-42fb-a3c2-53567553c663-kube-api-access-p2xnj\") pod \"redhat-marketplace-pkgfs\" (UID: \"8b57e8b6-5f6f-42fb-a3c2-53567553c663\") " pod="openshift-marketplace/redhat-marketplace-pkgfs" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.916513 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8th7\" (UniqueName: \"kubernetes.io/projected/f5699bb2-6633-43ac-9d64-3b83f3471e4d-kube-api-access-k8th7\") pod \"redhat-operators-lsrz4\" (UID: \"f5699bb2-6633-43ac-9d64-3b83f3471e4d\") " pod="openshift-marketplace/redhat-operators-lsrz4" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.916574 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5699bb2-6633-43ac-9d64-3b83f3471e4d-utilities\") pod \"redhat-operators-lsrz4\" (UID: \"f5699bb2-6633-43ac-9d64-3b83f3471e4d\") " pod="openshift-marketplace/redhat-operators-lsrz4" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.916599 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f5699bb2-6633-43ac-9d64-3b83f3471e4d-catalog-content\") pod \"redhat-operators-lsrz4\" (UID: \"f5699bb2-6633-43ac-9d64-3b83f3471e4d\") " pod="openshift-marketplace/redhat-operators-lsrz4" Oct 14 14:53:37 crc kubenswrapper[4860]: I1014 14:53:37.954395 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pkgfs" Oct 14 14:53:38 crc kubenswrapper[4860]: I1014 14:53:38.017387 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8th7\" (UniqueName: \"kubernetes.io/projected/f5699bb2-6633-43ac-9d64-3b83f3471e4d-kube-api-access-k8th7\") pod \"redhat-operators-lsrz4\" (UID: \"f5699bb2-6633-43ac-9d64-3b83f3471e4d\") " pod="openshift-marketplace/redhat-operators-lsrz4" Oct 14 14:53:38 crc kubenswrapper[4860]: I1014 14:53:38.017451 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5699bb2-6633-43ac-9d64-3b83f3471e4d-utilities\") pod \"redhat-operators-lsrz4\" (UID: \"f5699bb2-6633-43ac-9d64-3b83f3471e4d\") " pod="openshift-marketplace/redhat-operators-lsrz4" Oct 14 14:53:38 crc kubenswrapper[4860]: I1014 14:53:38.017477 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5699bb2-6633-43ac-9d64-3b83f3471e4d-catalog-content\") pod \"redhat-operators-lsrz4\" (UID: \"f5699bb2-6633-43ac-9d64-3b83f3471e4d\") " pod="openshift-marketplace/redhat-operators-lsrz4" Oct 14 14:53:38 crc kubenswrapper[4860]: I1014 14:53:38.017874 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5699bb2-6633-43ac-9d64-3b83f3471e4d-catalog-content\") pod \"redhat-operators-lsrz4\" (UID: \"f5699bb2-6633-43ac-9d64-3b83f3471e4d\") " pod="openshift-marketplace/redhat-operators-lsrz4" Oct 14 14:53:38 crc kubenswrapper[4860]: I1014 14:53:38.018139 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5699bb2-6633-43ac-9d64-3b83f3471e4d-utilities\") pod \"redhat-operators-lsrz4\" (UID: \"f5699bb2-6633-43ac-9d64-3b83f3471e4d\") " pod="openshift-marketplace/redhat-operators-lsrz4" Oct 14 14:53:38 crc kubenswrapper[4860]: I1014 14:53:38.038325 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8th7\" (UniqueName: \"kubernetes.io/projected/f5699bb2-6633-43ac-9d64-3b83f3471e4d-kube-api-access-k8th7\") pod \"redhat-operators-lsrz4\" (UID: \"f5699bb2-6633-43ac-9d64-3b83f3471e4d\") " pod="openshift-marketplace/redhat-operators-lsrz4" Oct 14 14:53:38 crc kubenswrapper[4860]: I1014 14:53:38.157935 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lsrz4" Oct 14 14:53:38 crc kubenswrapper[4860]: I1014 14:53:38.333576 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkgfs"] Oct 14 14:53:38 crc kubenswrapper[4860]: I1014 14:53:38.534199 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lsrz4"] Oct 14 14:53:38 crc kubenswrapper[4860]: W1014 14:53:38.542753 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5699bb2_6633_43ac_9d64_3b83f3471e4d.slice/crio-68a505dd70bcd5491de843154d653f32740369586eda17913e5cd80175f794cd WatchSource:0}: Error finding container 68a505dd70bcd5491de843154d653f32740369586eda17913e5cd80175f794cd: Status 404 returned error can't find the container with id 68a505dd70bcd5491de843154d653f32740369586eda17913e5cd80175f794cd Oct 14 14:53:38 crc kubenswrapper[4860]: I1014 14:53:38.672547 4860 generic.go:334] "Generic (PLEG): container finished" podID="8b57e8b6-5f6f-42fb-a3c2-53567553c663" containerID="12a7713a2ffdc4c51ac4f2d41fee6d5027c9916526ebb369a420c6163871b7fe" exitCode=0 Oct 14 14:53:38 crc kubenswrapper[4860]: I1014 14:53:38.672836 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkgfs" event={"ID":"8b57e8b6-5f6f-42fb-a3c2-53567553c663","Type":"ContainerDied","Data":"12a7713a2ffdc4c51ac4f2d41fee6d5027c9916526ebb369a420c6163871b7fe"} Oct 14 14:53:38 crc kubenswrapper[4860]: I1014 14:53:38.672860 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkgfs" event={"ID":"8b57e8b6-5f6f-42fb-a3c2-53567553c663","Type":"ContainerStarted","Data":"29050f33385f5a43f5a1fd004b690de5bbf46ce16e65d6ea811feb407dbd0542"} Oct 14 14:53:38 crc kubenswrapper[4860]: I1014 14:53:38.678704 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsrz4" event={"ID":"f5699bb2-6633-43ac-9d64-3b83f3471e4d","Type":"ContainerStarted","Data":"7857771cad1172b0cef37ab0b10e41dfd800a9ec8c37932fdac38c4862d260e4"} Oct 14 14:53:38 crc kubenswrapper[4860]: I1014 14:53:38.678750 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsrz4" event={"ID":"f5699bb2-6633-43ac-9d64-3b83f3471e4d","Type":"ContainerStarted","Data":"68a505dd70bcd5491de843154d653f32740369586eda17913e5cd80175f794cd"} Oct 14 14:53:39 crc kubenswrapper[4860]: I1014 14:53:39.684092 4860 generic.go:334] "Generic (PLEG): container finished" podID="f5699bb2-6633-43ac-9d64-3b83f3471e4d" containerID="7857771cad1172b0cef37ab0b10e41dfd800a9ec8c37932fdac38c4862d260e4" exitCode=0 Oct 14 14:53:39 crc kubenswrapper[4860]: I1014 14:53:39.684163 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsrz4" event={"ID":"f5699bb2-6633-43ac-9d64-3b83f3471e4d","Type":"ContainerDied","Data":"7857771cad1172b0cef37ab0b10e41dfd800a9ec8c37932fdac38c4862d260e4"} Oct 14 14:53:39 crc kubenswrapper[4860]: I1014 14:53:39.684198 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsrz4" event={"ID":"f5699bb2-6633-43ac-9d64-3b83f3471e4d","Type":"ContainerStarted","Data":"44fa30e52934fd2fd480702584e97b182ef7307180c86492eeed475ee107e0b8"} Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.024722 4860 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-vt7nl"] Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.026323 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.028276 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.038857 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vt7nl"] Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.043434 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xx42\" (UniqueName: \"kubernetes.io/projected/699e6482-e421-4a0a-b00e-8378366000ba-kube-api-access-4xx42\") pod \"certified-operators-vt7nl\" (UID: \"699e6482-e421-4a0a-b00e-8378366000ba\") " pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.043469 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/699e6482-e421-4a0a-b00e-8378366000ba-catalog-content\") pod \"certified-operators-vt7nl\" (UID: \"699e6482-e421-4a0a-b00e-8378366000ba\") " pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.043493 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/699e6482-e421-4a0a-b00e-8378366000ba-utilities\") pod \"certified-operators-vt7nl\" (UID: \"699e6482-e421-4a0a-b00e-8378366000ba\") " pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.145020 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xx42\" (UniqueName: \"kubernetes.io/projected/699e6482-e421-4a0a-b00e-8378366000ba-kube-api-access-4xx42\") pod \"certified-operators-vt7nl\" (UID: \"699e6482-e421-4a0a-b00e-8378366000ba\") " pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.145410 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/699e6482-e421-4a0a-b00e-8378366000ba-catalog-content\") pod \"certified-operators-vt7nl\" (UID: \"699e6482-e421-4a0a-b00e-8378366000ba\") " pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.145437 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/699e6482-e421-4a0a-b00e-8378366000ba-utilities\") pod \"certified-operators-vt7nl\" (UID: \"699e6482-e421-4a0a-b00e-8378366000ba\") " pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.145838 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/699e6482-e421-4a0a-b00e-8378366000ba-catalog-content\") pod \"certified-operators-vt7nl\" (UID: \"699e6482-e421-4a0a-b00e-8378366000ba\") " pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.145932 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/699e6482-e421-4a0a-b00e-8378366000ba-utilities\") pod \"certified-operators-vt7nl\" (UID: \"699e6482-e421-4a0a-b00e-8378366000ba\") " pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.164092 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xx42\" (UniqueName: \"kubernetes.io/projected/699e6482-e421-4a0a-b00e-8378366000ba-kube-api-access-4xx42\") pod \"certified-operators-vt7nl\" (UID: \"699e6482-e421-4a0a-b00e-8378366000ba\") " pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.222456 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m5zb9"] Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.223470 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m5zb9" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.225606 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.230843 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m5zb9"] Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.246592 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06644532-4731-4669-9d9f-c26cfa66a0de-utilities\") pod \"community-operators-m5zb9\" (UID: \"06644532-4731-4669-9d9f-c26cfa66a0de\") " pod="openshift-marketplace/community-operators-m5zb9" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.246657 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06644532-4731-4669-9d9f-c26cfa66a0de-catalog-content\") pod \"community-operators-m5zb9\" (UID: \"06644532-4731-4669-9d9f-c26cfa66a0de\") " pod="openshift-marketplace/community-operators-m5zb9" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.246723 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpz84\" (UniqueName: \"kubernetes.io/projected/06644532-4731-4669-9d9f-c26cfa66a0de-kube-api-access-bpz84\") pod \"community-operators-m5zb9\" (UID: \"06644532-4731-4669-9d9f-c26cfa66a0de\") " pod="openshift-marketplace/community-operators-m5zb9" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.347291 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.347384 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpz84\" (UniqueName: \"kubernetes.io/projected/06644532-4731-4669-9d9f-c26cfa66a0de-kube-api-access-bpz84\") pod \"community-operators-m5zb9\" (UID: \"06644532-4731-4669-9d9f-c26cfa66a0de\") " pod="openshift-marketplace/community-operators-m5zb9" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.347455 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06644532-4731-4669-9d9f-c26cfa66a0de-utilities\") pod \"community-operators-m5zb9\" (UID: \"06644532-4731-4669-9d9f-c26cfa66a0de\") " pod="openshift-marketplace/community-operators-m5zb9" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.347478 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06644532-4731-4669-9d9f-c26cfa66a0de-catalog-content\") pod \"community-operators-m5zb9\" (UID: \"06644532-4731-4669-9d9f-c26cfa66a0de\") " pod="openshift-marketplace/community-operators-m5zb9" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.347939 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06644532-4731-4669-9d9f-c26cfa66a0de-catalog-content\") pod \"community-operators-m5zb9\" (UID: \"06644532-4731-4669-9d9f-c26cfa66a0de\") " pod="openshift-marketplace/community-operators-m5zb9" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.348057 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06644532-4731-4669-9d9f-c26cfa66a0de-utilities\") pod \"community-operators-m5zb9\" (UID: \"06644532-4731-4669-9d9f-c26cfa66a0de\") " pod="openshift-marketplace/community-operators-m5zb9" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.366108 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpz84\" (UniqueName: \"kubernetes.io/projected/06644532-4731-4669-9d9f-c26cfa66a0de-kube-api-access-bpz84\") pod \"community-operators-m5zb9\" (UID: \"06644532-4731-4669-9d9f-c26cfa66a0de\") " pod="openshift-marketplace/community-operators-m5zb9" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.546350 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m5zb9" Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.693285 4860 generic.go:334] "Generic (PLEG): container finished" podID="8b57e8b6-5f6f-42fb-a3c2-53567553c663" containerID="b79a890f45bceb30e0f0d7201e98b99e6dd7b24e48b8405d807f58d18cd26fc6" exitCode=0 Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.693518 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkgfs" event={"ID":"8b57e8b6-5f6f-42fb-a3c2-53567553c663","Type":"ContainerDied","Data":"b79a890f45bceb30e0f0d7201e98b99e6dd7b24e48b8405d807f58d18cd26fc6"} Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.700601 4860 generic.go:334] "Generic (PLEG): container finished" podID="f5699bb2-6633-43ac-9d64-3b83f3471e4d" containerID="44fa30e52934fd2fd480702584e97b182ef7307180c86492eeed475ee107e0b8" exitCode=0 Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.700636 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsrz4" event={"ID":"f5699bb2-6633-43ac-9d64-3b83f3471e4d","Type":"ContainerDied","Data":"44fa30e52934fd2fd480702584e97b182ef7307180c86492eeed475ee107e0b8"} Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.769331 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vt7nl"] Oct 14 14:53:40 crc kubenswrapper[4860]: W1014 14:53:40.775259 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod699e6482_e421_4a0a_b00e_8378366000ba.slice/crio-be7339d30bf28bb05de2b2276e4ee58a7bb0da3adaf68cd770f3fd919e2879d5 WatchSource:0}: Error finding container be7339d30bf28bb05de2b2276e4ee58a7bb0da3adaf68cd770f3fd919e2879d5: Status 404 returned error can't find the container with id be7339d30bf28bb05de2b2276e4ee58a7bb0da3adaf68cd770f3fd919e2879d5 Oct 14 14:53:40 crc kubenswrapper[4860]: I1014 14:53:40.931190 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m5zb9"] Oct 14 14:53:40 crc kubenswrapper[4860]: W1014 14:53:40.998010 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06644532_4731_4669_9d9f_c26cfa66a0de.slice/crio-ed7223605b3235d12f8fba6235d046f327387a7b7c3e02b0793a88c97319fe74 WatchSource:0}: Error finding container ed7223605b3235d12f8fba6235d046f327387a7b7c3e02b0793a88c97319fe74: Status 404 returned error can't find the container with id ed7223605b3235d12f8fba6235d046f327387a7b7c3e02b0793a88c97319fe74 Oct 14 14:53:41 crc kubenswrapper[4860]: I1014 14:53:41.707815 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsrz4" event={"ID":"f5699bb2-6633-43ac-9d64-3b83f3471e4d","Type":"ContainerStarted","Data":"5e7464a0f1d62bb9efc3e167fdc2dea4ba3da388ce38808bf0bae4a9a467c609"} Oct 14 14:53:41 crc kubenswrapper[4860]: I1014 14:53:41.710553 4860 generic.go:334] "Generic (PLEG): container finished" podID="699e6482-e421-4a0a-b00e-8378366000ba" containerID="25f52b080dff04f223099e23f8661a1aef466397fa3584c9c87df7a3b35fbd06" exitCode=0 Oct 14 14:53:41 crc kubenswrapper[4860]: I1014 14:53:41.710771 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt7nl" event={"ID":"699e6482-e421-4a0a-b00e-8378366000ba","Type":"ContainerDied","Data":"25f52b080dff04f223099e23f8661a1aef466397fa3584c9c87df7a3b35fbd06"} Oct 
14 14:53:41 crc kubenswrapper[4860]: I1014 14:53:41.710889 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt7nl" event={"ID":"699e6482-e421-4a0a-b00e-8378366000ba","Type":"ContainerStarted","Data":"be7339d30bf28bb05de2b2276e4ee58a7bb0da3adaf68cd770f3fd919e2879d5"} Oct 14 14:53:41 crc kubenswrapper[4860]: I1014 14:53:41.711865 4860 generic.go:334] "Generic (PLEG): container finished" podID="06644532-4731-4669-9d9f-c26cfa66a0de" containerID="63e1e768b60b2a370df7ff05aaf57aa71075cee99687569904568476e2456904" exitCode=0 Oct 14 14:53:41 crc kubenswrapper[4860]: I1014 14:53:41.711902 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5zb9" event={"ID":"06644532-4731-4669-9d9f-c26cfa66a0de","Type":"ContainerDied","Data":"63e1e768b60b2a370df7ff05aaf57aa71075cee99687569904568476e2456904"} Oct 14 14:53:41 crc kubenswrapper[4860]: I1014 14:53:41.711917 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5zb9" event={"ID":"06644532-4731-4669-9d9f-c26cfa66a0de","Type":"ContainerStarted","Data":"ed7223605b3235d12f8fba6235d046f327387a7b7c3e02b0793a88c97319fe74"} Oct 14 14:53:41 crc kubenswrapper[4860]: I1014 14:53:41.715493 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkgfs" event={"ID":"8b57e8b6-5f6f-42fb-a3c2-53567553c663","Type":"ContainerStarted","Data":"3f27d59ae929173a6a089341cd34e428ebcb4563282cb8a24f9a6ca1e5900aed"} Oct 14 14:53:41 crc kubenswrapper[4860]: I1014 14:53:41.728777 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lsrz4" podStartSLOduration=2.076187925 podStartE2EDuration="4.72876161s" podCreationTimestamp="2025-10-14 14:53:37 +0000 UTC" firstStartedPulling="2025-10-14 14:53:38.679930825 +0000 UTC m=+280.266714274" lastFinishedPulling="2025-10-14 14:53:41.33250451 +0000 UTC m=+282.919287959" observedRunningTime="2025-10-14 14:53:41.724527253 +0000 UTC m=+283.311310702" watchObservedRunningTime="2025-10-14 14:53:41.72876161 +0000 UTC m=+283.315545059" Oct 14 14:53:41 crc kubenswrapper[4860]: I1014 14:53:41.757741 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pkgfs" podStartSLOduration=2.089456265 podStartE2EDuration="4.757722665s" podCreationTimestamp="2025-10-14 14:53:37 +0000 UTC" firstStartedPulling="2025-10-14 14:53:38.673869552 +0000 UTC m=+280.260653001" lastFinishedPulling="2025-10-14 14:53:41.342135952 +0000 UTC m=+282.928919401" observedRunningTime="2025-10-14 14:53:41.755226663 +0000 UTC m=+283.342010122" watchObservedRunningTime="2025-10-14 14:53:41.757722665 +0000 UTC m=+283.344506124" Oct 14 14:53:42 crc kubenswrapper[4860]: I1014 14:53:42.721273 4860 generic.go:334] "Generic (PLEG): container finished" podID="699e6482-e421-4a0a-b00e-8378366000ba" containerID="82b4b215cb77229247ea9d46fd7e74ad03d52523959246e089e69a3d855f5e27" exitCode=0 Oct 14 14:53:42 crc kubenswrapper[4860]: I1014 14:53:42.721330 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt7nl" event={"ID":"699e6482-e421-4a0a-b00e-8378366000ba","Type":"ContainerDied","Data":"82b4b215cb77229247ea9d46fd7e74ad03d52523959246e089e69a3d855f5e27"} Oct 14 14:53:42 crc kubenswrapper[4860]: I1014 14:53:42.726146 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5zb9" 
event={"ID":"06644532-4731-4669-9d9f-c26cfa66a0de","Type":"ContainerStarted","Data":"41f5a54c6889c38b0df2a65af21df9428a50dfe5d281dbfc68ace9849670d70b"} Oct 14 14:53:43 crc kubenswrapper[4860]: I1014 14:53:43.742047 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt7nl" event={"ID":"699e6482-e421-4a0a-b00e-8378366000ba","Type":"ContainerStarted","Data":"2f373eab8746489647fb05711af93ac47c036366ced508e76678947911e58ac0"} Oct 14 14:53:43 crc kubenswrapper[4860]: I1014 14:53:43.744574 4860 generic.go:334] "Generic (PLEG): container finished" podID="06644532-4731-4669-9d9f-c26cfa66a0de" containerID="41f5a54c6889c38b0df2a65af21df9428a50dfe5d281dbfc68ace9849670d70b" exitCode=0 Oct 14 14:53:43 crc kubenswrapper[4860]: I1014 14:53:43.744617 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5zb9" event={"ID":"06644532-4731-4669-9d9f-c26cfa66a0de","Type":"ContainerDied","Data":"41f5a54c6889c38b0df2a65af21df9428a50dfe5d281dbfc68ace9849670d70b"} Oct 14 14:53:43 crc kubenswrapper[4860]: I1014 14:53:43.761425 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vt7nl" podStartSLOduration=2.335764459 podStartE2EDuration="3.761408962s" podCreationTimestamp="2025-10-14 14:53:40 +0000 UTC" firstStartedPulling="2025-10-14 14:53:41.712464501 +0000 UTC m=+283.299247950" lastFinishedPulling="2025-10-14 14:53:43.138109004 +0000 UTC m=+284.724892453" observedRunningTime="2025-10-14 14:53:43.759451923 +0000 UTC m=+285.346235392" watchObservedRunningTime="2025-10-14 14:53:43.761408962 +0000 UTC m=+285.348192411" Oct 14 14:53:45 crc kubenswrapper[4860]: I1014 14:53:45.756159 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5zb9" event={"ID":"06644532-4731-4669-9d9f-c26cfa66a0de","Type":"ContainerStarted","Data":"9d8d6a60ca363a4eed6d26573bebceaf9d3c33688036dd13181100d130a1d480"} Oct 14 14:53:45 crc kubenswrapper[4860]: I1014 14:53:45.784456 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m5zb9" podStartSLOduration=3.171735817 podStartE2EDuration="5.784438464s" podCreationTimestamp="2025-10-14 14:53:40 +0000 UTC" firstStartedPulling="2025-10-14 14:53:41.714296707 +0000 UTC m=+283.301080156" lastFinishedPulling="2025-10-14 14:53:44.326999354 +0000 UTC m=+285.913782803" observedRunningTime="2025-10-14 14:53:45.782670489 +0000 UTC m=+287.369453948" watchObservedRunningTime="2025-10-14 14:53:45.784438464 +0000 UTC m=+287.371221913" Oct 14 14:53:47 crc kubenswrapper[4860]: I1014 14:53:47.954775 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pkgfs" Oct 14 14:53:47 crc kubenswrapper[4860]: I1014 14:53:47.955147 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pkgfs" Oct 14 14:53:48 crc kubenswrapper[4860]: I1014 14:53:48.009468 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pkgfs" Oct 14 14:53:48 crc kubenswrapper[4860]: I1014 14:53:48.158897 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lsrz4" Oct 14 14:53:48 crc kubenswrapper[4860]: I1014 14:53:48.159102 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-lsrz4" Oct 14 14:53:48 crc kubenswrapper[4860]: I1014 14:53:48.194521 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lsrz4" Oct 14 14:53:48 crc kubenswrapper[4860]: I1014 14:53:48.817072 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pkgfs" Oct 14 14:53:48 crc kubenswrapper[4860]: I1014 14:53:48.826219 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lsrz4" Oct 14 14:53:50 crc kubenswrapper[4860]: I1014 14:53:50.347571 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 14:53:50 crc kubenswrapper[4860]: I1014 14:53:50.347667 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 14:53:50 crc kubenswrapper[4860]: I1014 14:53:50.392496 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 14:53:50 crc kubenswrapper[4860]: I1014 14:53:50.547215 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m5zb9" Oct 14 14:53:50 crc kubenswrapper[4860]: I1014 14:53:50.547560 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m5zb9" Oct 14 14:53:50 crc kubenswrapper[4860]: I1014 14:53:50.580308 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m5zb9" Oct 14 14:53:50 crc kubenswrapper[4860]: I1014 14:53:50.814801 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 14:53:50 crc kubenswrapper[4860]: I1014 14:53:50.822423 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m5zb9" Oct 14 14:54:29 crc kubenswrapper[4860]: I1014 14:54:29.245305 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:54:29 crc kubenswrapper[4860]: I1014 14:54:29.245907 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:54:59 crc kubenswrapper[4860]: I1014 14:54:59.245235 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:54:59 crc kubenswrapper[4860]: I1014 14:54:59.246883 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:55:29 crc kubenswrapper[4860]: I1014 14:55:29.244991 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:55:29 crc kubenswrapper[4860]: I1014 14:55:29.245561 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:55:29 crc kubenswrapper[4860]: I1014 14:55:29.245618 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:55:29 crc kubenswrapper[4860]: I1014 14:55:29.246261 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25f232e83add52308352a0c71839405001f25fe7657f02b1bf05e81be7c47a92"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 14:55:29 crc kubenswrapper[4860]: I1014 14:55:29.246311 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://25f232e83add52308352a0c71839405001f25fe7657f02b1bf05e81be7c47a92" gracePeriod=600 Oct 14 14:55:30 crc kubenswrapper[4860]: I1014 14:55:30.319626 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="25f232e83add52308352a0c71839405001f25fe7657f02b1bf05e81be7c47a92" exitCode=0 Oct 14 14:55:30 crc kubenswrapper[4860]: I1014 14:55:30.319705 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"25f232e83add52308352a0c71839405001f25fe7657f02b1bf05e81be7c47a92"} Oct 14 14:55:30 crc kubenswrapper[4860]: I1014 14:55:30.320159 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"67a523d43812524378261b885891f72f29fa0d349cdcddee224ad39682f7b455"} Oct 14 14:55:30 crc kubenswrapper[4860]: I1014 14:55:30.320217 4860 scope.go:117] "RemoveContainer" containerID="5f02ad01ea4a3a58c910cbd208bf99bcffaa53f768f59bb77bff4a1200174a81" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.057920 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lm2zv"] Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.058951 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.102816 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lm2zv"] Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.147761 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/de13b431-10f7-4f24-9b40-204cfd3c4ab4-bound-sa-token\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.147815 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.147851 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/de13b431-10f7-4f24-9b40-204cfd3c4ab4-registry-certificates\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.147869 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/de13b431-10f7-4f24-9b40-204cfd3c4ab4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.147925 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/de13b431-10f7-4f24-9b40-204cfd3c4ab4-registry-tls\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.147958 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8zvq\" (UniqueName: \"kubernetes.io/projected/de13b431-10f7-4f24-9b40-204cfd3c4ab4-kube-api-access-v8zvq\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.147972 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de13b431-10f7-4f24-9b40-204cfd3c4ab4-trusted-ca\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.148017 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/de13b431-10f7-4f24-9b40-204cfd3c4ab4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.208818 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.249152 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/de13b431-10f7-4f24-9b40-204cfd3c4ab4-registry-certificates\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.249208 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/de13b431-10f7-4f24-9b40-204cfd3c4ab4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.249252 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/de13b431-10f7-4f24-9b40-204cfd3c4ab4-registry-tls\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.249278 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8zvq\" (UniqueName: \"kubernetes.io/projected/de13b431-10f7-4f24-9b40-204cfd3c4ab4-kube-api-access-v8zvq\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.249293 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de13b431-10f7-4f24-9b40-204cfd3c4ab4-trusted-ca\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.249330 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/de13b431-10f7-4f24-9b40-204cfd3c4ab4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.249362 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/de13b431-10f7-4f24-9b40-204cfd3c4ab4-bound-sa-token\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.251152 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/de13b431-10f7-4f24-9b40-204cfd3c4ab4-registry-certificates\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.251476 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/de13b431-10f7-4f24-9b40-204cfd3c4ab4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.253465 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de13b431-10f7-4f24-9b40-204cfd3c4ab4-trusted-ca\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.258080 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/de13b431-10f7-4f24-9b40-204cfd3c4ab4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.267270 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/de13b431-10f7-4f24-9b40-204cfd3c4ab4-registry-tls\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.268279 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/de13b431-10f7-4f24-9b40-204cfd3c4ab4-bound-sa-token\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.278998 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8zvq\" (UniqueName: \"kubernetes.io/projected/de13b431-10f7-4f24-9b40-204cfd3c4ab4-kube-api-access-v8zvq\") pod \"image-registry-66df7c8f76-lm2zv\" (UID: \"de13b431-10f7-4f24-9b40-204cfd3c4ab4\") " pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.376339 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.612784 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lm2zv"] Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.859648 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" event={"ID":"de13b431-10f7-4f24-9b40-204cfd3c4ab4","Type":"ContainerStarted","Data":"fb45d1378b57ba7798273230b502e61270825a09629c477f19cec7e53ec75671"} Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.859691 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" event={"ID":"de13b431-10f7-4f24-9b40-204cfd3c4ab4","Type":"ContainerStarted","Data":"7db9e3b49e0fa3a1e57302150055f72d493e9f87f9952f058453c769ed91f202"} Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.860278 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:08 crc kubenswrapper[4860]: I1014 14:57:08.878452 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" podStartSLOduration=0.878437571 podStartE2EDuration="878.437571ms" podCreationTimestamp="2025-10-14 14:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:57:08.877756085 +0000 UTC m=+490.464539534" watchObservedRunningTime="2025-10-14 14:57:08.878437571 +0000 UTC m=+490.465221020" Oct 14 14:57:28 crc kubenswrapper[4860]: I1014 14:57:28.384437 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-lm2zv" Oct 14 14:57:28 crc kubenswrapper[4860]: I1014 14:57:28.437452 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-msfwt"] Oct 14 14:57:29 crc kubenswrapper[4860]: I1014 14:57:29.245898 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:57:29 crc kubenswrapper[4860]: I1014 14:57:29.246133 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.531517 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" podUID="c3beff9b-3e98-4d7d-88b0-bbe3271dcb78" containerName="registry" containerID="cri-o://a6430f1704c9c168f6369614108a40170facdf12793738fb9209af2b195cf7ff" gracePeriod=30 Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.874518 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.974360 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-ca-trust-extracted\") pod \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.974405 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-installation-pull-secrets\") pod \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.974451 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-bound-sa-token\") pod \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.974482 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-trusted-ca\") pod \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.974549 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-registry-certificates\") pod \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.974566 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-registry-tls\") pod \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.974587 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgnm4\" (UniqueName: \"kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-kube-api-access-cgnm4\") pod \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.974731 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\" (UID: \"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78\") " Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.975355 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.975768 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.976142 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.980772 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.981492 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-kube-api-access-cgnm4" (OuterVolumeSpecName: "kube-api-access-cgnm4") pod "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78"). InnerVolumeSpecName "kube-api-access-cgnm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.995742 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.995850 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:57:53 crc kubenswrapper[4860]: I1014 14:57:53.996279 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 14 14:57:54 crc kubenswrapper[4860]: I1014 14:57:54.005362 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78" (UID: "c3beff9b-3e98-4d7d-88b0-bbe3271dcb78"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:57:54 crc kubenswrapper[4860]: I1014 14:57:54.077318 4860 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 14 14:57:54 crc kubenswrapper[4860]: I1014 14:57:54.077369 4860 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 14 14:57:54 crc kubenswrapper[4860]: I1014 14:57:54.077388 4860 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 14 14:57:54 crc kubenswrapper[4860]: I1014 14:57:54.077403 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgnm4\" (UniqueName: \"kubernetes.io/projected/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-kube-api-access-cgnm4\") on node \"crc\" DevicePath \"\"" Oct 14 14:57:54 crc kubenswrapper[4860]: I1014 14:57:54.077418 4860 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 14 14:57:54 crc kubenswrapper[4860]: I1014 14:57:54.077433 4860 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 14 14:57:54 crc kubenswrapper[4860]: I1014 14:57:54.132070 4860 generic.go:334] "Generic (PLEG): container finished" podID="c3beff9b-3e98-4d7d-88b0-bbe3271dcb78" containerID="a6430f1704c9c168f6369614108a40170facdf12793738fb9209af2b195cf7ff" exitCode=0 Oct 14 14:57:54 crc kubenswrapper[4860]: I1014 14:57:54.132126 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" event={"ID":"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78","Type":"ContainerDied","Data":"a6430f1704c9c168f6369614108a40170facdf12793738fb9209af2b195cf7ff"} Oct 14 14:57:54 crc kubenswrapper[4860]: I1014 14:57:54.132162 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" event={"ID":"c3beff9b-3e98-4d7d-88b0-bbe3271dcb78","Type":"ContainerDied","Data":"59ed0064670c981e075f3981cb3c759ac359a8914f00baa0c78da7238837cebf"} Oct 14 14:57:54 crc kubenswrapper[4860]: I1014 14:57:54.132184 4860 scope.go:117] "RemoveContainer" containerID="a6430f1704c9c168f6369614108a40170facdf12793738fb9209af2b195cf7ff" Oct 14 14:57:54 crc kubenswrapper[4860]: I1014 14:57:54.132320 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" Oct 14 14:57:54 crc kubenswrapper[4860]: I1014 14:57:54.156005 4860 scope.go:117] "RemoveContainer" containerID="a6430f1704c9c168f6369614108a40170facdf12793738fb9209af2b195cf7ff" Oct 14 14:57:54 crc kubenswrapper[4860]: E1014 14:57:54.156889 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6430f1704c9c168f6369614108a40170facdf12793738fb9209af2b195cf7ff\": container with ID starting with a6430f1704c9c168f6369614108a40170facdf12793738fb9209af2b195cf7ff not found: ID does not exist" containerID="a6430f1704c9c168f6369614108a40170facdf12793738fb9209af2b195cf7ff" Oct 14 14:57:54 crc kubenswrapper[4860]: I1014 14:57:54.156934 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6430f1704c9c168f6369614108a40170facdf12793738fb9209af2b195cf7ff"} err="failed to get container status \"a6430f1704c9c168f6369614108a40170facdf12793738fb9209af2b195cf7ff\": rpc error: code = NotFound desc = could not find container \"a6430f1704c9c168f6369614108a40170facdf12793738fb9209af2b195cf7ff\": container with ID starting with a6430f1704c9c168f6369614108a40170facdf12793738fb9209af2b195cf7ff not found: ID does not exist" Oct 14 14:57:54 crc kubenswrapper[4860]: I1014 14:57:54.167517 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-msfwt"] Oct 14 14:57:54 crc kubenswrapper[4860]: I1014 14:57:54.171524 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-msfwt"] Oct 14 14:57:55 crc kubenswrapper[4860]: I1014 14:57:55.070448 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3beff9b-3e98-4d7d-88b0-bbe3271dcb78" path="/var/lib/kubelet/pods/c3beff9b-3e98-4d7d-88b0-bbe3271dcb78/volumes" Oct 14 14:57:58 crc kubenswrapper[4860]: I1014 14:57:58.792488 4860 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-msfwt container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.20:5000/healthz\": dial tcp 10.217.0.20:5000: i/o timeout" start-of-body= Oct 14 14:57:58 crc kubenswrapper[4860]: I1014 14:57:58.792793 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-msfwt" podUID="c3beff9b-3e98-4d7d-88b0-bbe3271dcb78" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.20:5000/healthz\": dial tcp 10.217.0.20:5000: i/o timeout" Oct 14 14:57:59 crc kubenswrapper[4860]: I1014 14:57:59.246754 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:57:59 crc kubenswrapper[4860]: I1014 14:57:59.246829 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:58:29 crc kubenswrapper[4860]: I1014 14:58:29.246064 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:58:29 crc kubenswrapper[4860]: I1014 14:58:29.246603 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:58:29 crc kubenswrapper[4860]: I1014 14:58:29.246648 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 14:58:29 crc kubenswrapper[4860]: I1014 14:58:29.247182 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67a523d43812524378261b885891f72f29fa0d349cdcddee224ad39682f7b455"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 14:58:29 crc kubenswrapper[4860]: I1014 14:58:29.247232 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://67a523d43812524378261b885891f72f29fa0d349cdcddee224ad39682f7b455" gracePeriod=600 Oct 14 14:58:30 crc kubenswrapper[4860]: I1014 14:58:30.328691 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="67a523d43812524378261b885891f72f29fa0d349cdcddee224ad39682f7b455" exitCode=0 Oct 14 14:58:30 crc kubenswrapper[4860]: I1014 14:58:30.328767 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"67a523d43812524378261b885891f72f29fa0d349cdcddee224ad39682f7b455"} Oct 14 14:58:30 crc kubenswrapper[4860]: I1014 14:58:30.329223 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"966bd2ec6b906257cac7c7ee826b7b876455d65da8f5b51b82ca36af7678fd4f"} Oct 14 14:58:30 crc kubenswrapper[4860]: I1014 14:58:30.329245 4860 scope.go:117] "RemoveContainer" containerID="25f232e83add52308352a0c71839405001f25fe7657f02b1bf05e81be7c47a92" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.138945 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7"] Oct 14 15:00:00 crc kubenswrapper[4860]: E1014 15:00:00.139646 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3beff9b-3e98-4d7d-88b0-bbe3271dcb78" containerName="registry" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.139657 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3beff9b-3e98-4d7d-88b0-bbe3271dcb78" containerName="registry" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.139755 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3beff9b-3e98-4d7d-88b0-bbe3271dcb78" containerName="registry" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 
15:00:00.140139 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.141565 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7"] Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.142963 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.143321 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.175632 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2q87\" (UniqueName: \"kubernetes.io/projected/407016dc-637e-487c-ba77-86b2f4752266-kube-api-access-d2q87\") pod \"collect-profiles-29340900-rskw7\" (UID: \"407016dc-637e-487c-ba77-86b2f4752266\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.175699 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/407016dc-637e-487c-ba77-86b2f4752266-config-volume\") pod \"collect-profiles-29340900-rskw7\" (UID: \"407016dc-637e-487c-ba77-86b2f4752266\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.175718 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/407016dc-637e-487c-ba77-86b2f4752266-secret-volume\") pod \"collect-profiles-29340900-rskw7\" (UID: \"407016dc-637e-487c-ba77-86b2f4752266\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.277440 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/407016dc-637e-487c-ba77-86b2f4752266-config-volume\") pod \"collect-profiles-29340900-rskw7\" (UID: \"407016dc-637e-487c-ba77-86b2f4752266\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.277494 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/407016dc-637e-487c-ba77-86b2f4752266-secret-volume\") pod \"collect-profiles-29340900-rskw7\" (UID: \"407016dc-637e-487c-ba77-86b2f4752266\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.277552 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2q87\" (UniqueName: \"kubernetes.io/projected/407016dc-637e-487c-ba77-86b2f4752266-kube-api-access-d2q87\") pod \"collect-profiles-29340900-rskw7\" (UID: \"407016dc-637e-487c-ba77-86b2f4752266\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.278505 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/407016dc-637e-487c-ba77-86b2f4752266-config-volume\") pod \"collect-profiles-29340900-rskw7\" (UID: \"407016dc-637e-487c-ba77-86b2f4752266\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.283803 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/407016dc-637e-487c-ba77-86b2f4752266-secret-volume\") pod \"collect-profiles-29340900-rskw7\" (UID: \"407016dc-637e-487c-ba77-86b2f4752266\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.296683 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2q87\" (UniqueName: \"kubernetes.io/projected/407016dc-637e-487c-ba77-86b2f4752266-kube-api-access-d2q87\") pod \"collect-profiles-29340900-rskw7\" (UID: \"407016dc-637e-487c-ba77-86b2f4752266\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.458533 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.650976 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7"] Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.816906 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" event={"ID":"407016dc-637e-487c-ba77-86b2f4752266","Type":"ContainerStarted","Data":"cad0eb55cd201b2a956a9f4753a524f95c0e2329b86e9f96ea4e57f39def2bb5"} Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.817297 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" event={"ID":"407016dc-637e-487c-ba77-86b2f4752266","Type":"ContainerStarted","Data":"4667db29788794900eca6d52cf81526df11c2917a0a99a06825da76a7262115a"} Oct 14 15:00:00 crc kubenswrapper[4860]: I1014 15:00:00.835004 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" podStartSLOduration=0.83498241 podStartE2EDuration="834.98241ms" podCreationTimestamp="2025-10-14 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:00:00.832453459 +0000 UTC m=+662.419236908" watchObservedRunningTime="2025-10-14 15:00:00.83498241 +0000 UTC m=+662.421765859" Oct 14 15:00:01 crc kubenswrapper[4860]: I1014 15:00:01.822815 4860 generic.go:334] "Generic (PLEG): container finished" podID="407016dc-637e-487c-ba77-86b2f4752266" containerID="cad0eb55cd201b2a956a9f4753a524f95c0e2329b86e9f96ea4e57f39def2bb5" exitCode=0 Oct 14 15:00:01 crc kubenswrapper[4860]: I1014 15:00:01.822860 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" event={"ID":"407016dc-637e-487c-ba77-86b2f4752266","Type":"ContainerDied","Data":"cad0eb55cd201b2a956a9f4753a524f95c0e2329b86e9f96ea4e57f39def2bb5"} Oct 14 15:00:03 crc kubenswrapper[4860]: I1014 15:00:03.035008 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" Oct 14 15:00:03 crc kubenswrapper[4860]: I1014 15:00:03.211926 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/407016dc-637e-487c-ba77-86b2f4752266-secret-volume\") pod \"407016dc-637e-487c-ba77-86b2f4752266\" (UID: \"407016dc-637e-487c-ba77-86b2f4752266\") " Oct 14 15:00:03 crc kubenswrapper[4860]: I1014 15:00:03.211973 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/407016dc-637e-487c-ba77-86b2f4752266-config-volume\") pod \"407016dc-637e-487c-ba77-86b2f4752266\" (UID: \"407016dc-637e-487c-ba77-86b2f4752266\") " Oct 14 15:00:03 crc kubenswrapper[4860]: I1014 15:00:03.212059 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2q87\" (UniqueName: \"kubernetes.io/projected/407016dc-637e-487c-ba77-86b2f4752266-kube-api-access-d2q87\") pod \"407016dc-637e-487c-ba77-86b2f4752266\" (UID: \"407016dc-637e-487c-ba77-86b2f4752266\") " Oct 14 15:00:03 crc kubenswrapper[4860]: I1014 15:00:03.213137 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/407016dc-637e-487c-ba77-86b2f4752266-config-volume" (OuterVolumeSpecName: "config-volume") pod "407016dc-637e-487c-ba77-86b2f4752266" (UID: "407016dc-637e-487c-ba77-86b2f4752266"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:00:03 crc kubenswrapper[4860]: I1014 15:00:03.218199 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407016dc-637e-487c-ba77-86b2f4752266-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "407016dc-637e-487c-ba77-86b2f4752266" (UID: "407016dc-637e-487c-ba77-86b2f4752266"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:00:03 crc kubenswrapper[4860]: I1014 15:00:03.223677 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/407016dc-637e-487c-ba77-86b2f4752266-kube-api-access-d2q87" (OuterVolumeSpecName: "kube-api-access-d2q87") pod "407016dc-637e-487c-ba77-86b2f4752266" (UID: "407016dc-637e-487c-ba77-86b2f4752266"). InnerVolumeSpecName "kube-api-access-d2q87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:00:03 crc kubenswrapper[4860]: I1014 15:00:03.313755 4860 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/407016dc-637e-487c-ba77-86b2f4752266-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 15:00:03 crc kubenswrapper[4860]: I1014 15:00:03.313788 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/407016dc-637e-487c-ba77-86b2f4752266-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 15:00:03 crc kubenswrapper[4860]: I1014 15:00:03.313801 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2q87\" (UniqueName: \"kubernetes.io/projected/407016dc-637e-487c-ba77-86b2f4752266-kube-api-access-d2q87\") on node \"crc\" DevicePath \"\"" Oct 14 15:00:03 crc kubenswrapper[4860]: I1014 15:00:03.832936 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" event={"ID":"407016dc-637e-487c-ba77-86b2f4752266","Type":"ContainerDied","Data":"4667db29788794900eca6d52cf81526df11c2917a0a99a06825da76a7262115a"} Oct 14 15:00:03 crc kubenswrapper[4860]: I1014 15:00:03.832969 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4667db29788794900eca6d52cf81526df11c2917a0a99a06825da76a7262115a" Oct 14 15:00:03 crc kubenswrapper[4860]: I1014 15:00:03.833020 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7" Oct 14 15:00:29 crc kubenswrapper[4860]: I1014 15:00:29.245564 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:00:29 crc kubenswrapper[4860]: I1014 15:00:29.246179 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:00:59 crc kubenswrapper[4860]: I1014 15:00:59.245596 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:00:59 crc kubenswrapper[4860]: I1014 15:00:59.247142 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:01:28 crc kubenswrapper[4860]: I1014 15:01:28.502816 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m84ss"] Oct 14 15:01:28 crc kubenswrapper[4860]: I1014 15:01:28.503620 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" 
podUID="ca4179d4-5b4c-4b52-be97-9a0e9aa8c106" containerName="controller-manager" containerID="cri-o://43d49602abbfe6c1036f1ebb90f01d1943f78c77b63fba677b34ea280b214d77" gracePeriod=30 Oct 14 15:01:28 crc kubenswrapper[4860]: I1014 15:01:28.615213 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s"] Oct 14 15:01:28 crc kubenswrapper[4860]: I1014 15:01:28.615406 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" podUID="bdb25ff1-18af-4f95-a3e7-09472726d3df" containerName="route-controller-manager" containerID="cri-o://8735501bf10727b6612ca4daf1edbff9c867540a3fd741903e813ab5b7c88323" gracePeriod=30 Oct 14 15:01:28 crc kubenswrapper[4860]: I1014 15:01:28.870408 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 15:01:28 crc kubenswrapper[4860]: I1014 15:01:28.943171 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.057512 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhcgp\" (UniqueName: \"kubernetes.io/projected/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-kube-api-access-mhcgp\") pod \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.057601 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-client-ca\") pod \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.057629 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdb25ff1-18af-4f95-a3e7-09472726d3df-client-ca\") pod \"bdb25ff1-18af-4f95-a3e7-09472726d3df\" (UID: \"bdb25ff1-18af-4f95-a3e7-09472726d3df\") " Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.057647 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdb25ff1-18af-4f95-a3e7-09472726d3df-config\") pod \"bdb25ff1-18af-4f95-a3e7-09472726d3df\" (UID: \"bdb25ff1-18af-4f95-a3e7-09472726d3df\") " Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.057683 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-config\") pod \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.057697 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-proxy-ca-bundles\") pod \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.057723 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtpnw\" (UniqueName: 
\"kubernetes.io/projected/bdb25ff1-18af-4f95-a3e7-09472726d3df-kube-api-access-wtpnw\") pod \"bdb25ff1-18af-4f95-a3e7-09472726d3df\" (UID: \"bdb25ff1-18af-4f95-a3e7-09472726d3df\") " Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.057744 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdb25ff1-18af-4f95-a3e7-09472726d3df-serving-cert\") pod \"bdb25ff1-18af-4f95-a3e7-09472726d3df\" (UID: \"bdb25ff1-18af-4f95-a3e7-09472726d3df\") " Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.057758 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-serving-cert\") pod \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\" (UID: \"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106\") " Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.059362 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdb25ff1-18af-4f95-a3e7-09472726d3df-client-ca" (OuterVolumeSpecName: "client-ca") pod "bdb25ff1-18af-4f95-a3e7-09472726d3df" (UID: "bdb25ff1-18af-4f95-a3e7-09472726d3df"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.059410 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdb25ff1-18af-4f95-a3e7-09472726d3df-config" (OuterVolumeSpecName: "config") pod "bdb25ff1-18af-4f95-a3e7-09472726d3df" (UID: "bdb25ff1-18af-4f95-a3e7-09472726d3df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.059592 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-client-ca" (OuterVolumeSpecName: "client-ca") pod "ca4179d4-5b4c-4b52-be97-9a0e9aa8c106" (UID: "ca4179d4-5b4c-4b52-be97-9a0e9aa8c106"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.059980 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-config" (OuterVolumeSpecName: "config") pod "ca4179d4-5b4c-4b52-be97-9a0e9aa8c106" (UID: "ca4179d4-5b4c-4b52-be97-9a0e9aa8c106"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.060259 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ca4179d4-5b4c-4b52-be97-9a0e9aa8c106" (UID: "ca4179d4-5b4c-4b52-be97-9a0e9aa8c106"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.063244 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdb25ff1-18af-4f95-a3e7-09472726d3df-kube-api-access-wtpnw" (OuterVolumeSpecName: "kube-api-access-wtpnw") pod "bdb25ff1-18af-4f95-a3e7-09472726d3df" (UID: "bdb25ff1-18af-4f95-a3e7-09472726d3df"). InnerVolumeSpecName "kube-api-access-wtpnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.077172 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ca4179d4-5b4c-4b52-be97-9a0e9aa8c106" (UID: "ca4179d4-5b4c-4b52-be97-9a0e9aa8c106"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.077608 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-kube-api-access-mhcgp" (OuterVolumeSpecName: "kube-api-access-mhcgp") pod "ca4179d4-5b4c-4b52-be97-9a0e9aa8c106" (UID: "ca4179d4-5b4c-4b52-be97-9a0e9aa8c106"). InnerVolumeSpecName "kube-api-access-mhcgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.088270 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdb25ff1-18af-4f95-a3e7-09472726d3df-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bdb25ff1-18af-4f95-a3e7-09472726d3df" (UID: "bdb25ff1-18af-4f95-a3e7-09472726d3df"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.159253 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-client-ca\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.159299 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdb25ff1-18af-4f95-a3e7-09472726d3df-client-ca\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.159308 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdb25ff1-18af-4f95-a3e7-09472726d3df-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.159359 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.159370 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.159379 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtpnw\" (UniqueName: \"kubernetes.io/projected/bdb25ff1-18af-4f95-a3e7-09472726d3df-kube-api-access-wtpnw\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.159388 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.159396 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdb25ff1-18af-4f95-a3e7-09472726d3df-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.159403 4860 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhcgp\" (UniqueName: \"kubernetes.io/projected/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106-kube-api-access-mhcgp\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.245760 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.245817 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.245857 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.246537 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"966bd2ec6b906257cac7c7ee826b7b876455d65da8f5b51b82ca36af7678fd4f"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.246594 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://966bd2ec6b906257cac7c7ee826b7b876455d65da8f5b51b82ca36af7678fd4f" gracePeriod=600 Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.294315 4860 generic.go:334] "Generic (PLEG): container finished" podID="bdb25ff1-18af-4f95-a3e7-09472726d3df" containerID="8735501bf10727b6612ca4daf1edbff9c867540a3fd741903e813ab5b7c88323" exitCode=0 Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.294376 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.294402 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" event={"ID":"bdb25ff1-18af-4f95-a3e7-09472726d3df","Type":"ContainerDied","Data":"8735501bf10727b6612ca4daf1edbff9c867540a3fd741903e813ab5b7c88323"} Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.294476 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" event={"ID":"bdb25ff1-18af-4f95-a3e7-09472726d3df","Type":"ContainerDied","Data":"f85af2f1d3b9c6ec45960bd9fe68d5d743a7aeef5a35c86982c22ef39ff02d88"} Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.294499 4860 scope.go:117] "RemoveContainer" containerID="8735501bf10727b6612ca4daf1edbff9c867540a3fd741903e813ab5b7c88323" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.300813 4860 generic.go:334] "Generic (PLEG): container finished" podID="ca4179d4-5b4c-4b52-be97-9a0e9aa8c106" containerID="43d49602abbfe6c1036f1ebb90f01d1943f78c77b63fba677b34ea280b214d77" exitCode=0 Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.300854 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" event={"ID":"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106","Type":"ContainerDied","Data":"43d49602abbfe6c1036f1ebb90f01d1943f78c77b63fba677b34ea280b214d77"} Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.300877 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" event={"ID":"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106","Type":"ContainerDied","Data":"e27de943b243fef1d322f58c96c421bc80c71eb5362bf6bc36f93cc6aea53d64"} Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.300886 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m84ss" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.320828 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m84ss"] Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.326691 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m84ss"] Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.332323 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s"] Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.332713 4860 scope.go:117] "RemoveContainer" containerID="8735501bf10727b6612ca4daf1edbff9c867540a3fd741903e813ab5b7c88323" Oct 14 15:01:29 crc kubenswrapper[4860]: E1014 15:01:29.333689 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8735501bf10727b6612ca4daf1edbff9c867540a3fd741903e813ab5b7c88323\": container with ID starting with 8735501bf10727b6612ca4daf1edbff9c867540a3fd741903e813ab5b7c88323 not found: ID does not exist" containerID="8735501bf10727b6612ca4daf1edbff9c867540a3fd741903e813ab5b7c88323" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.333769 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8735501bf10727b6612ca4daf1edbff9c867540a3fd741903e813ab5b7c88323"} err="failed to get container status \"8735501bf10727b6612ca4daf1edbff9c867540a3fd741903e813ab5b7c88323\": rpc error: code = NotFound desc = could not find container \"8735501bf10727b6612ca4daf1edbff9c867540a3fd741903e813ab5b7c88323\": container with ID starting with 8735501bf10727b6612ca4daf1edbff9c867540a3fd741903e813ab5b7c88323 not found: ID does not exist" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.335400 4860 scope.go:117] "RemoveContainer" containerID="43d49602abbfe6c1036f1ebb90f01d1943f78c77b63fba677b34ea280b214d77" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.340260 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s"] Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.355983 4860 scope.go:117] "RemoveContainer" containerID="43d49602abbfe6c1036f1ebb90f01d1943f78c77b63fba677b34ea280b214d77" Oct 14 15:01:29 crc kubenswrapper[4860]: E1014 15:01:29.357231 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43d49602abbfe6c1036f1ebb90f01d1943f78c77b63fba677b34ea280b214d77\": container with ID starting with 43d49602abbfe6c1036f1ebb90f01d1943f78c77b63fba677b34ea280b214d77 not found: ID does not exist" containerID="43d49602abbfe6c1036f1ebb90f01d1943f78c77b63fba677b34ea280b214d77" Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.357266 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43d49602abbfe6c1036f1ebb90f01d1943f78c77b63fba677b34ea280b214d77"} err="failed to get container status \"43d49602abbfe6c1036f1ebb90f01d1943f78c77b63fba677b34ea280b214d77\": rpc error: code = NotFound desc = could not find container \"43d49602abbfe6c1036f1ebb90f01d1943f78c77b63fba677b34ea280b214d77\": container with ID starting with 43d49602abbfe6c1036f1ebb90f01d1943f78c77b63fba677b34ea280b214d77 not found: ID does not exist" Oct 14 15:01:29 crc 
kubenswrapper[4860]: I1014 15:01:29.905789 4860 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-2dz4s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 15:01:29 crc kubenswrapper[4860]: I1014 15:01:29.907131 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2dz4s" podUID="bdb25ff1-18af-4f95-a3e7-09472726d3df" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.014138 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97"] Oct 14 15:01:30 crc kubenswrapper[4860]: E1014 15:01:30.014513 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4179d4-5b4c-4b52-be97-9a0e9aa8c106" containerName="controller-manager" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.014536 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4179d4-5b4c-4b52-be97-9a0e9aa8c106" containerName="controller-manager" Oct 14 15:01:30 crc kubenswrapper[4860]: E1014 15:01:30.014552 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdb25ff1-18af-4f95-a3e7-09472726d3df" containerName="route-controller-manager" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.014561 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdb25ff1-18af-4f95-a3e7-09472726d3df" containerName="route-controller-manager" Oct 14 15:01:30 crc kubenswrapper[4860]: E1014 15:01:30.014580 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407016dc-637e-487c-ba77-86b2f4752266" containerName="collect-profiles" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.014591 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="407016dc-637e-487c-ba77-86b2f4752266" containerName="collect-profiles" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.014703 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="407016dc-637e-487c-ba77-86b2f4752266" containerName="collect-profiles" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.014721 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4179d4-5b4c-4b52-be97-9a0e9aa8c106" containerName="controller-manager" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.014731 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdb25ff1-18af-4f95-a3e7-09472726d3df" containerName="route-controller-manager" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.015204 4860 util.go:30] "No sandbox for pod can be found. 
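
Before admitting the replacement controller-manager pod above, cpu_manager and memory_manager drop per-container state left behind by the deleted pods (the RemoveStaleState and "Deleted CPUSet assignment" lines for the old controller-manager, route-controller-manager, and collect-profiles containers). A toy sketch of that cleanup, keyed by pod UID (UIDs drawn from the log; the CPU-set values are invented; not kubelet code):

package main

import "fmt"

func main() {
	// podUID -> container -> assigned CPU set (illustrative values).
	assignments := map[string]map[string]string{
		"ca4179d4-5b4c-4b52-be97-9a0e9aa8c106": {"controller-manager": "0-1"},
		"407016dc-637e-487c-ba77-86b2f4752266": {"collect-profiles": "2-3"},
		"fd6ff5f6-6417-4957-a382-89378c84071d": {"route-controller-manager": "4-5"},
	}
	// UIDs of pods the kubelet still knows about.
	active := map[string]bool{"fd6ff5f6-6417-4957-a382-89378c84071d": true}

	for uid, containers := range assignments {
		if active[uid] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", uid, name)
		}
		delete(assignments, uid) // deleting during range is safe in Go
	}
}
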
Need to start a new one" pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.017366 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.018017 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.019297 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.021280 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc"] Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.022365 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.025958 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.026415 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.026558 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.026916 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.026921 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.027317 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.027398 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.027861 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.029772 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.034706 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.049145 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97"] Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.051904 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc"] Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.272644 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/34454c71-075b-4bdb-b961-a7fb3c5d4eea-config\") pod \"controller-manager-59ffc6bd4f-lcr97\" (UID: \"34454c71-075b-4bdb-b961-a7fb3c5d4eea\") " pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.272697 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34454c71-075b-4bdb-b961-a7fb3c5d4eea-proxy-ca-bundles\") pod \"controller-manager-59ffc6bd4f-lcr97\" (UID: \"34454c71-075b-4bdb-b961-a7fb3c5d4eea\") " pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.272722 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vh6f\" (UniqueName: \"kubernetes.io/projected/fd6ff5f6-6417-4957-a382-89378c84071d-kube-api-access-2vh6f\") pod \"route-controller-manager-7d4fb8b4bd-b47xc\" (UID: \"fd6ff5f6-6417-4957-a382-89378c84071d\") " pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.272757 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dmwp\" (UniqueName: \"kubernetes.io/projected/34454c71-075b-4bdb-b961-a7fb3c5d4eea-kube-api-access-7dmwp\") pod \"controller-manager-59ffc6bd4f-lcr97\" (UID: \"34454c71-075b-4bdb-b961-a7fb3c5d4eea\") " pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.272831 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd6ff5f6-6417-4957-a382-89378c84071d-serving-cert\") pod \"route-controller-manager-7d4fb8b4bd-b47xc\" (UID: \"fd6ff5f6-6417-4957-a382-89378c84071d\") " pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.272860 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34454c71-075b-4bdb-b961-a7fb3c5d4eea-serving-cert\") pod \"controller-manager-59ffc6bd4f-lcr97\" (UID: \"34454c71-075b-4bdb-b961-a7fb3c5d4eea\") " pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.272885 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34454c71-075b-4bdb-b961-a7fb3c5d4eea-client-ca\") pod \"controller-manager-59ffc6bd4f-lcr97\" (UID: \"34454c71-075b-4bdb-b961-a7fb3c5d4eea\") " pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.272903 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd6ff5f6-6417-4957-a382-89378c84071d-config\") pod \"route-controller-manager-7d4fb8b4bd-b47xc\" (UID: \"fd6ff5f6-6417-4957-a382-89378c84071d\") " pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.272923 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/fd6ff5f6-6417-4957-a382-89378c84071d-client-ca\") pod \"route-controller-manager-7d4fb8b4bd-b47xc\" (UID: \"fd6ff5f6-6417-4957-a382-89378c84071d\") " pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.308918 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="966bd2ec6b906257cac7c7ee826b7b876455d65da8f5b51b82ca36af7678fd4f" exitCode=0 Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.308954 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"966bd2ec6b906257cac7c7ee826b7b876455d65da8f5b51b82ca36af7678fd4f"} Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.309002 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"35f60ae25f79186f53f554e65dfb897f3e59fbee448cf25d36669e90dcf31a8b"} Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.309063 4860 scope.go:117] "RemoveContainer" containerID="67a523d43812524378261b885891f72f29fa0d349cdcddee224ad39682f7b455" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.374676 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34454c71-075b-4bdb-b961-a7fb3c5d4eea-config\") pod \"controller-manager-59ffc6bd4f-lcr97\" (UID: \"34454c71-075b-4bdb-b961-a7fb3c5d4eea\") " pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.374734 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34454c71-075b-4bdb-b961-a7fb3c5d4eea-proxy-ca-bundles\") pod \"controller-manager-59ffc6bd4f-lcr97\" (UID: \"34454c71-075b-4bdb-b961-a7fb3c5d4eea\") " pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.374760 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vh6f\" (UniqueName: \"kubernetes.io/projected/fd6ff5f6-6417-4957-a382-89378c84071d-kube-api-access-2vh6f\") pod \"route-controller-manager-7d4fb8b4bd-b47xc\" (UID: \"fd6ff5f6-6417-4957-a382-89378c84071d\") " pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.374794 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dmwp\" (UniqueName: \"kubernetes.io/projected/34454c71-075b-4bdb-b961-a7fb3c5d4eea-kube-api-access-7dmwp\") pod \"controller-manager-59ffc6bd4f-lcr97\" (UID: \"34454c71-075b-4bdb-b961-a7fb3c5d4eea\") " pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.374828 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd6ff5f6-6417-4957-a382-89378c84071d-serving-cert\") pod \"route-controller-manager-7d4fb8b4bd-b47xc\" (UID: \"fd6ff5f6-6417-4957-a382-89378c84071d\") " pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" Oct 14 15:01:30 crc 
kubenswrapper[4860]: I1014 15:01:30.374849 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34454c71-075b-4bdb-b961-a7fb3c5d4eea-serving-cert\") pod \"controller-manager-59ffc6bd4f-lcr97\" (UID: \"34454c71-075b-4bdb-b961-a7fb3c5d4eea\") " pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.374875 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34454c71-075b-4bdb-b961-a7fb3c5d4eea-client-ca\") pod \"controller-manager-59ffc6bd4f-lcr97\" (UID: \"34454c71-075b-4bdb-b961-a7fb3c5d4eea\") " pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.374896 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd6ff5f6-6417-4957-a382-89378c84071d-config\") pod \"route-controller-manager-7d4fb8b4bd-b47xc\" (UID: \"fd6ff5f6-6417-4957-a382-89378c84071d\") " pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.374918 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd6ff5f6-6417-4957-a382-89378c84071d-client-ca\") pod \"route-controller-manager-7d4fb8b4bd-b47xc\" (UID: \"fd6ff5f6-6417-4957-a382-89378c84071d\") " pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.375816 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd6ff5f6-6417-4957-a382-89378c84071d-client-ca\") pod \"route-controller-manager-7d4fb8b4bd-b47xc\" (UID: \"fd6ff5f6-6417-4957-a382-89378c84071d\") " pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.377676 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34454c71-075b-4bdb-b961-a7fb3c5d4eea-client-ca\") pod \"controller-manager-59ffc6bd4f-lcr97\" (UID: \"34454c71-075b-4bdb-b961-a7fb3c5d4eea\") " pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.377912 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34454c71-075b-4bdb-b961-a7fb3c5d4eea-proxy-ca-bundles\") pod \"controller-manager-59ffc6bd4f-lcr97\" (UID: \"34454c71-075b-4bdb-b961-a7fb3c5d4eea\") " pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.377999 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd6ff5f6-6417-4957-a382-89378c84071d-config\") pod \"route-controller-manager-7d4fb8b4bd-b47xc\" (UID: \"fd6ff5f6-6417-4957-a382-89378c84071d\") " pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.378568 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34454c71-075b-4bdb-b961-a7fb3c5d4eea-config\") 
pod \"controller-manager-59ffc6bd4f-lcr97\" (UID: \"34454c71-075b-4bdb-b961-a7fb3c5d4eea\") " pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.381825 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34454c71-075b-4bdb-b961-a7fb3c5d4eea-serving-cert\") pod \"controller-manager-59ffc6bd4f-lcr97\" (UID: \"34454c71-075b-4bdb-b961-a7fb3c5d4eea\") " pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.387406 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd6ff5f6-6417-4957-a382-89378c84071d-serving-cert\") pod \"route-controller-manager-7d4fb8b4bd-b47xc\" (UID: \"fd6ff5f6-6417-4957-a382-89378c84071d\") " pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.398460 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vh6f\" (UniqueName: \"kubernetes.io/projected/fd6ff5f6-6417-4957-a382-89378c84071d-kube-api-access-2vh6f\") pod \"route-controller-manager-7d4fb8b4bd-b47xc\" (UID: \"fd6ff5f6-6417-4957-a382-89378c84071d\") " pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.400387 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dmwp\" (UniqueName: \"kubernetes.io/projected/34454c71-075b-4bdb-b961-a7fb3c5d4eea-kube-api-access-7dmwp\") pod \"controller-manager-59ffc6bd4f-lcr97\" (UID: \"34454c71-075b-4bdb-b961-a7fb3c5d4eea\") " pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.633292 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.642997 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.868709 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97"] Oct 14 15:01:30 crc kubenswrapper[4860]: I1014 15:01:30.935681 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc"] Oct 14 15:01:31 crc kubenswrapper[4860]: I1014 15:01:31.068504 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdb25ff1-18af-4f95-a3e7-09472726d3df" path="/var/lib/kubelet/pods/bdb25ff1-18af-4f95-a3e7-09472726d3df/volumes" Oct 14 15:01:31 crc kubenswrapper[4860]: I1014 15:01:31.069227 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca4179d4-5b4c-4b52-be97-9a0e9aa8c106" path="/var/lib/kubelet/pods/ca4179d4-5b4c-4b52-be97-9a0e9aa8c106/volumes" Oct 14 15:01:31 crc kubenswrapper[4860]: I1014 15:01:31.318517 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" event={"ID":"34454c71-075b-4bdb-b961-a7fb3c5d4eea","Type":"ContainerStarted","Data":"dae196379b0402fff65f8803926bad5dd909eda36c150016bae134921bd3ceae"} Oct 14 15:01:31 crc kubenswrapper[4860]: I1014 15:01:31.318998 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" event={"ID":"34454c71-075b-4bdb-b961-a7fb3c5d4eea","Type":"ContainerStarted","Data":"73894e362d13426c26b88e97e2f6c727c6afee0a61d452e27b42c2a9343e6508"} Oct 14 15:01:31 crc kubenswrapper[4860]: I1014 15:01:31.319088 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:31 crc kubenswrapper[4860]: I1014 15:01:31.319790 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" event={"ID":"fd6ff5f6-6417-4957-a382-89378c84071d","Type":"ContainerStarted","Data":"43339a7fba557e7f74b17fcb810677a5a0a0ad3ddbceb12bb47acf4a8ba51477"} Oct 14 15:01:31 crc kubenswrapper[4860]: I1014 15:01:31.319820 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" event={"ID":"fd6ff5f6-6417-4957-a382-89378c84071d","Type":"ContainerStarted","Data":"18e286905879f1271e322c27fb0bc8646be59aa1f3deb13f6625df5c82f35cc6"} Oct 14 15:01:31 crc kubenswrapper[4860]: I1014 15:01:31.319972 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" Oct 14 15:01:31 crc kubenswrapper[4860]: I1014 15:01:31.347042 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" podStartSLOduration=3.347002767 podStartE2EDuration="3.347002767s" podCreationTimestamp="2025-10-14 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:01:31.339139445 +0000 UTC m=+752.925922894" watchObservedRunningTime="2025-10-14 15:01:31.347002767 +0000 UTC m=+752.933786216" Oct 14 15:01:31 crc kubenswrapper[4860]: I1014 15:01:31.352448 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-59ffc6bd4f-lcr97" Oct 14 15:01:31 crc kubenswrapper[4860]: I1014 15:01:31.398701 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" podStartSLOduration=3.398682816 podStartE2EDuration="3.398682816s" podCreationTimestamp="2025-10-14 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:01:31.370884789 +0000 UTC m=+752.957668238" watchObservedRunningTime="2025-10-14 15:01:31.398682816 +0000 UTC m=+752.985466265" Oct 14 15:01:31 crc kubenswrapper[4860]: I1014 15:01:31.506980 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" Oct 14 15:01:38 crc kubenswrapper[4860]: I1014 15:01:38.135822 4860 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.210556 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-z626q"] Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.211370 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-z626q" Oct 14 15:01:39 crc kubenswrapper[4860]: W1014 15:01:39.213725 4860 reflector.go:561] object-"cert-manager"/"cert-manager-cainjector-dockercfg-vvhmj": failed to list *v1.Secret: secrets "cert-manager-cainjector-dockercfg-vvhmj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object Oct 14 15:01:39 crc kubenswrapper[4860]: E1014 15:01:39.213763 4860 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-vvhmj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-manager-cainjector-dockercfg-vvhmj\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 14 15:01:39 crc kubenswrapper[4860]: W1014 15:01:39.213938 4860 reflector.go:561] object-"cert-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object Oct 14 15:01:39 crc kubenswrapper[4860]: E1014 15:01:39.213999 4860 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 14 15:01:39 crc kubenswrapper[4860]: W1014 15:01:39.214780 4860 reflector.go:561] object-"cert-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and 
this object Oct 14 15:01:39 crc kubenswrapper[4860]: E1014 15:01:39.214828 4860 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.227437 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-d96mc"] Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.228136 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-d96mc" Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.229579 4860 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-z59ps" Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.231439 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-z626q"] Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.237182 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-d96mc"] Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.273433 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-xjn4j"] Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.274311 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-xjn4j" Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.279663 4860 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-v7zzh" Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.288699 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-xjn4j"] Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.310688 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwmb2\" (UniqueName: \"kubernetes.io/projected/d1972274-e4e4-4910-b996-98f16f66de5e-kube-api-access-mwmb2\") pod \"cert-manager-cainjector-7f985d654d-z626q\" (UID: \"d1972274-e4e4-4910-b996-98f16f66de5e\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-z626q" Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.310787 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dndg\" (UniqueName: \"kubernetes.io/projected/ce7fd78e-7ed7-450e-bca7-ca9075b12a25-kube-api-access-5dndg\") pod \"cert-manager-5b446d88c5-d96mc\" (UID: \"ce7fd78e-7ed7-450e-bca7-ca9075b12a25\") " pod="cert-manager/cert-manager-5b446d88c5-d96mc" Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.310880 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7gl5\" (UniqueName: \"kubernetes.io/projected/bbaa104e-a070-4f3d-8807-959b551312b9-kube-api-access-x7gl5\") pod \"cert-manager-webhook-5655c58dd6-xjn4j\" (UID: \"bbaa104e-a070-4f3d-8807-959b551312b9\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-xjn4j" Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.411771 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mwmb2\" (UniqueName: \"kubernetes.io/projected/d1972274-e4e4-4910-b996-98f16f66de5e-kube-api-access-mwmb2\") pod \"cert-manager-cainjector-7f985d654d-z626q\" (UID: \"d1972274-e4e4-4910-b996-98f16f66de5e\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-z626q" Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.411839 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dndg\" (UniqueName: \"kubernetes.io/projected/ce7fd78e-7ed7-450e-bca7-ca9075b12a25-kube-api-access-5dndg\") pod \"cert-manager-5b446d88c5-d96mc\" (UID: \"ce7fd78e-7ed7-450e-bca7-ca9075b12a25\") " pod="cert-manager/cert-manager-5b446d88c5-d96mc" Oct 14 15:01:39 crc kubenswrapper[4860]: I1014 15:01:39.411889 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7gl5\" (UniqueName: \"kubernetes.io/projected/bbaa104e-a070-4f3d-8807-959b551312b9-kube-api-access-x7gl5\") pod \"cert-manager-webhook-5655c58dd6-xjn4j\" (UID: \"bbaa104e-a070-4f3d-8807-959b551312b9\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-xjn4j" Oct 14 15:01:40 crc kubenswrapper[4860]: I1014 15:01:40.065505 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 14 15:01:40 crc kubenswrapper[4860]: I1014 15:01:40.331769 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 14 15:01:40 crc kubenswrapper[4860]: I1014 15:01:40.340769 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7gl5\" (UniqueName: \"kubernetes.io/projected/bbaa104e-a070-4f3d-8807-959b551312b9-kube-api-access-x7gl5\") pod \"cert-manager-webhook-5655c58dd6-xjn4j\" (UID: \"bbaa104e-a070-4f3d-8807-959b551312b9\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-xjn4j" Oct 14 15:01:40 crc kubenswrapper[4860]: I1014 15:01:40.343078 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dndg\" (UniqueName: \"kubernetes.io/projected/ce7fd78e-7ed7-450e-bca7-ca9075b12a25-kube-api-access-5dndg\") pod \"cert-manager-5b446d88c5-d96mc\" (UID: \"ce7fd78e-7ed7-450e-bca7-ca9075b12a25\") " pod="cert-manager/cert-manager-5b446d88c5-d96mc" Oct 14 15:01:40 crc kubenswrapper[4860]: I1014 15:01:40.343866 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwmb2\" (UniqueName: \"kubernetes.io/projected/d1972274-e4e4-4910-b996-98f16f66de5e-kube-api-access-mwmb2\") pod \"cert-manager-cainjector-7f985d654d-z626q\" (UID: \"d1972274-e4e4-4910-b996-98f16f66de5e\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-z626q" Oct 14 15:01:40 crc kubenswrapper[4860]: I1014 15:01:40.447679 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-d96mc" Oct 14 15:01:40 crc kubenswrapper[4860]: I1014 15:01:40.490324 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-xjn4j" Oct 14 15:01:40 crc kubenswrapper[4860]: I1014 15:01:40.769918 4860 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vvhmj" Oct 14 15:01:40 crc kubenswrapper[4860]: I1014 15:01:40.777345 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-z626q" Oct 14 15:01:40 crc kubenswrapper[4860]: I1014 15:01:40.884980 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-d96mc"] Oct 14 15:01:40 crc kubenswrapper[4860]: I1014 15:01:40.899836 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 15:01:40 crc kubenswrapper[4860]: I1014 15:01:40.947728 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-xjn4j"] Oct 14 15:01:40 crc kubenswrapper[4860]: W1014 15:01:40.957288 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbaa104e_a070_4f3d_8807_959b551312b9.slice/crio-3682ef025a6546dd5622b0d145c9159819dd3141100e488cedc5327a1e6419b0 WatchSource:0}: Error finding container 3682ef025a6546dd5622b0d145c9159819dd3141100e488cedc5327a1e6419b0: Status 404 returned error can't find the container with id 3682ef025a6546dd5622b0d145c9159819dd3141100e488cedc5327a1e6419b0 Oct 14 15:01:41 crc kubenswrapper[4860]: I1014 15:01:41.181545 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-z626q"] Oct 14 15:01:41 crc kubenswrapper[4860]: W1014 15:01:41.188476 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1972274_e4e4_4910_b996_98f16f66de5e.slice/crio-be47187af5d7527b24a4fdd114db29262c7333957f2e208e042e1ab4c6be1225 WatchSource:0}: Error finding container be47187af5d7527b24a4fdd114db29262c7333957f2e208e042e1ab4c6be1225: Status 404 returned error can't find the container with id be47187af5d7527b24a4fdd114db29262c7333957f2e208e042e1ab4c6be1225 Oct 14 15:01:41 crc kubenswrapper[4860]: I1014 15:01:41.413634 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-xjn4j" event={"ID":"bbaa104e-a070-4f3d-8807-959b551312b9","Type":"ContainerStarted","Data":"3682ef025a6546dd5622b0d145c9159819dd3141100e488cedc5327a1e6419b0"} Oct 14 15:01:41 crc kubenswrapper[4860]: I1014 15:01:41.414401 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-z626q" event={"ID":"d1972274-e4e4-4910-b996-98f16f66de5e","Type":"ContainerStarted","Data":"be47187af5d7527b24a4fdd114db29262c7333957f2e208e042e1ab4c6be1225"} Oct 14 15:01:41 crc kubenswrapper[4860]: I1014 15:01:41.415105 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-d96mc" event={"ID":"ce7fd78e-7ed7-450e-bca7-ca9075b12a25","Type":"ContainerStarted","Data":"a1becdeba40ecfb757f08dd6ad7d5e56e58d911ca75c6b3672e75114bfeaaa96"} Oct 14 15:01:44 crc kubenswrapper[4860]: I1014 15:01:44.430630 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-xjn4j" event={"ID":"bbaa104e-a070-4f3d-8807-959b551312b9","Type":"ContainerStarted","Data":"f084ba68861b46fbecb65df3a96870be9a62b548d781ae5ff2bd1431192385d8"} Oct 14 15:01:44 crc kubenswrapper[4860]: I1014 15:01:44.431184 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-xjn4j" Oct 14 15:01:44 crc kubenswrapper[4860]: I1014 15:01:44.435861 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-d96mc" 
event={"ID":"ce7fd78e-7ed7-450e-bca7-ca9075b12a25","Type":"ContainerStarted","Data":"2f63032c0fa0806a8c463fc80e00096b4bf8da6cdddc267ff09d2c6770342b8c"} Oct 14 15:01:44 crc kubenswrapper[4860]: I1014 15:01:44.457658 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-xjn4j" podStartSLOduration=2.514605856 podStartE2EDuration="5.457594851s" podCreationTimestamp="2025-10-14 15:01:39 +0000 UTC" firstStartedPulling="2025-10-14 15:01:40.959476095 +0000 UTC m=+762.546259544" lastFinishedPulling="2025-10-14 15:01:43.90246509 +0000 UTC m=+765.489248539" observedRunningTime="2025-10-14 15:01:44.446925051 +0000 UTC m=+766.033708510" watchObservedRunningTime="2025-10-14 15:01:44.457594851 +0000 UTC m=+766.044378320" Oct 14 15:01:44 crc kubenswrapper[4860]: I1014 15:01:44.463894 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-d96mc" podStartSLOduration=2.409947515 podStartE2EDuration="5.463874133s" podCreationTimestamp="2025-10-14 15:01:39 +0000 UTC" firstStartedPulling="2025-10-14 15:01:40.899633127 +0000 UTC m=+762.486416576" lastFinishedPulling="2025-10-14 15:01:43.953559745 +0000 UTC m=+765.540343194" observedRunningTime="2025-10-14 15:01:44.460170883 +0000 UTC m=+766.046954352" watchObservedRunningTime="2025-10-14 15:01:44.463874133 +0000 UTC m=+766.050657582" Oct 14 15:01:45 crc kubenswrapper[4860]: I1014 15:01:45.442728 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-z626q" event={"ID":"d1972274-e4e4-4910-b996-98f16f66de5e","Type":"ContainerStarted","Data":"4c626e0a1c9414e2dfacba87afe20fa12c9b9113d848f704cfc1814f6efca875"} Oct 14 15:01:45 crc kubenswrapper[4860]: I1014 15:01:45.479503 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-z626q" podStartSLOduration=2.5721056239999998 podStartE2EDuration="6.479484728s" podCreationTimestamp="2025-10-14 15:01:39 +0000 UTC" firstStartedPulling="2025-10-14 15:01:41.190542772 +0000 UTC m=+762.777326221" lastFinishedPulling="2025-10-14 15:01:45.097921876 +0000 UTC m=+766.684705325" observedRunningTime="2025-10-14 15:01:45.465922448 +0000 UTC m=+767.052705897" watchObservedRunningTime="2025-10-14 15:01:45.479484728 +0000 UTC m=+767.066268177" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.468346 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mdvx2"] Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.469220 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovn-controller" containerID="cri-o://b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c" gracePeriod=30 Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.469298 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="northd" containerID="cri-o://522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86" gracePeriod=30 Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.469329 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovn-acl-logging" 
containerID="cri-o://ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6" gracePeriod=30 Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.469362 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="sbdb" containerID="cri-o://2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe" gracePeriod=30 Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.469506 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a" gracePeriod=30 Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.469292 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="nbdb" containerID="cri-o://1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573" gracePeriod=30 Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.469623 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="kube-rbac-proxy-node" containerID="cri-o://ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1" gracePeriod=30 Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.513673 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovnkube-controller" containerID="cri-o://75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de" gracePeriod=30 Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.806977 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovnkube-controller/3.log" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.809546 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovn-acl-logging/0.log" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.809989 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovn-controller/0.log" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.810387 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.868590 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qdm4s"] Oct 14 15:01:49 crc kubenswrapper[4860]: E1014 15:01:49.868879 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovnkube-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.868895 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovnkube-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: E1014 15:01:49.868957 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovnkube-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.868966 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovnkube-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: E1014 15:01:49.868981 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="nbdb" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.868988 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="nbdb" Oct 14 15:01:49 crc kubenswrapper[4860]: E1014 15:01:49.869004 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovnkube-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.869009 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovnkube-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: E1014 15:01:49.869017 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="sbdb" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.869039 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="sbdb" Oct 14 15:01:49 crc kubenswrapper[4860]: E1014 15:01:49.869052 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovn-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.869060 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovn-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: E1014 15:01:49.869069 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="kube-rbac-proxy-node" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.869076 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="kube-rbac-proxy-node" Oct 14 15:01:49 crc kubenswrapper[4860]: E1014 15:01:49.869088 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="kubecfg-setup" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.869095 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="kubecfg-setup" Oct 14 15:01:49 crc kubenswrapper[4860]: E1014 15:01:49.869105 4860 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovnkube-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.869111 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovnkube-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: E1014 15:01:49.869118 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="northd" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.869125 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="northd" Oct 14 15:01:49 crc kubenswrapper[4860]: E1014 15:01:49.869137 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovn-acl-logging" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.871909 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovn-acl-logging" Oct 14 15:01:49 crc kubenswrapper[4860]: E1014 15:01:49.871918 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="kube-rbac-proxy-ovn-metrics" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.871924 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="kube-rbac-proxy-ovn-metrics" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.872047 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovnkube-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.872058 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovnkube-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.872065 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovnkube-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.872073 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="northd" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.872080 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovnkube-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.872087 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="kube-rbac-proxy-ovn-metrics" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.872094 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="kube-rbac-proxy-node" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.872102 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="nbdb" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.872109 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovn-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.872117 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="sbdb" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 
15:01:49.872126 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovn-acl-logging" Oct 14 15:01:49 crc kubenswrapper[4860]: E1014 15:01:49.872212 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovnkube-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.872219 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovnkube-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.872367 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerName="ovnkube-controller" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.873845 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948089 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovnkube-config\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948178 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-systemd-units\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948207 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-env-overrides\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948237 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-systemd\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948256 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-run-ovn-kubernetes\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948284 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-ovn\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948315 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-cni-bin\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948342 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovn-node-metrics-cert\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948369 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-node-log\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948401 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-etc-openvswitch\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948420 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-log-socket\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948441 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-cni-netd\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948460 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948482 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-run-netns\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948502 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-slash\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948520 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-var-lib-openvswitch\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948544 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-openvswitch\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948561 4860 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-kubelet\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948585 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg7wr\" (UniqueName: \"kubernetes.io/projected/87a92ec1-e2b0-407d-990e-ce52a980b64b-kube-api-access-cg7wr\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948607 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovnkube-script-lib\") pod \"87a92ec1-e2b0-407d-990e-ce52a980b64b\" (UID: \"87a92ec1-e2b0-407d-990e-ce52a980b64b\") " Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948611 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948654 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948677 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948742 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-kubelet\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948772 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-slash\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948793 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-run-systemd\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948823 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/871569e0-a7c2-4890-b131-c4e9a2e43227-env-overrides\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948850 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-etc-openvswitch\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948873 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-run-openvswitch\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948894 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-cni-netd\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948909 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948920 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/871569e0-a7c2-4890-b131-c4e9a2e43227-ovnkube-script-lib\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948949 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-run-ovn\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.948979 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-var-lib-openvswitch\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949005 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949047 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-log-socket\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949085 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/871569e0-a7c2-4890-b131-c4e9a2e43227-ovnkube-config\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949104 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-run-netns\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949126 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-systemd-units\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949149 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hffsp\" (UniqueName: 
\"kubernetes.io/projected/871569e0-a7c2-4890-b131-c4e9a2e43227-kube-api-access-hffsp\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949168 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-node-log\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949197 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-cni-bin\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949218 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/871569e0-a7c2-4890-b131-c4e9a2e43227-ovn-node-metrics-cert\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949239 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-run-ovn-kubernetes\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949288 4860 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949336 4860 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949350 4860 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949365 4860 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949408 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-log-socket" (OuterVolumeSpecName: "log-socket") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949432 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949483 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949513 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949533 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-slash" (OuterVolumeSpecName: "host-slash") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949605 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949629 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949677 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949779 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949797 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.949835 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.951543 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.951724 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-node-log" (OuterVolumeSpecName: "node-log") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.954192 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a92ec1-e2b0-407d-990e-ce52a980b64b-kube-api-access-cg7wr" (OuterVolumeSpecName: "kube-api-access-cg7wr") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "kube-api-access-cg7wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.955155 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:01:49 crc kubenswrapper[4860]: I1014 15:01:49.961703 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "87a92ec1-e2b0-407d-990e-ce52a980b64b" (UID: "87a92ec1-e2b0-407d-990e-ce52a980b64b"). 
InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.050928 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/871569e0-a7c2-4890-b131-c4e9a2e43227-ovnkube-script-lib\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051020 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-run-ovn\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051076 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-var-lib-openvswitch\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051100 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051116 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-log-socket\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051145 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-run-netns\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051157 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-run-ovn\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051206 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-var-lib-openvswitch\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051245 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qdm4s\" (UID: 
\"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051275 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-log-socket\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051306 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-run-netns\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051162 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/871569e0-a7c2-4890-b131-c4e9a2e43227-ovnkube-config\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051349 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-systemd-units\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051371 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hffsp\" (UniqueName: \"kubernetes.io/projected/871569e0-a7c2-4890-b131-c4e9a2e43227-kube-api-access-hffsp\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051387 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-node-log\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051411 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-cni-bin\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051432 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/871569e0-a7c2-4890-b131-c4e9a2e43227-ovn-node-metrics-cert\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051453 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-run-ovn-kubernetes\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 
14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051469 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-kubelet\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051484 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-slash\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051498 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-run-systemd\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051520 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/871569e0-a7c2-4890-b131-c4e9a2e43227-env-overrides\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051535 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-etc-openvswitch\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051550 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-run-openvswitch\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051566 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-cni-netd\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051602 4860 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-systemd\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051613 4860 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051622 4860 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051630 4860 
reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051639 4860 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051648 4860 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-node-log\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051656 4860 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-log-socket\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051664 4860 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051673 4860 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051683 4860 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-run-netns\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051691 4860 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-slash\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051699 4860 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051707 4860 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051715 4860 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87a92ec1-e2b0-407d-990e-ce52a980b64b-host-kubelet\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051723 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg7wr\" (UniqueName: \"kubernetes.io/projected/87a92ec1-e2b0-407d-990e-ce52a980b64b-kube-api-access-cg7wr\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051733 4860 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87a92ec1-e2b0-407d-990e-ce52a980b64b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051755 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-cni-netd\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051775 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-systemd-units\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051816 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/871569e0-a7c2-4890-b131-c4e9a2e43227-ovnkube-config\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051853 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-kubelet\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051874 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-node-log\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.051892 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-cni-bin\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.052045 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-slash\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.052074 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-run-systemd\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.052478 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/871569e0-a7c2-4890-b131-c4e9a2e43227-env-overrides\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.052520 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-etc-openvswitch\") pod \"ovnkube-node-qdm4s\" 
(UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.052551 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-run-openvswitch\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.052583 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/871569e0-a7c2-4890-b131-c4e9a2e43227-host-run-ovn-kubernetes\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.052621 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/871569e0-a7c2-4890-b131-c4e9a2e43227-ovnkube-script-lib\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.056634 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/871569e0-a7c2-4890-b131-c4e9a2e43227-ovn-node-metrics-cert\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.068661 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hffsp\" (UniqueName: \"kubernetes.io/projected/871569e0-a7c2-4890-b131-c4e9a2e43227-kube-api-access-hffsp\") pod \"ovnkube-node-qdm4s\" (UID: \"871569e0-a7c2-4890-b131-c4e9a2e43227\") " pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.191888 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:50 crc kubenswrapper[4860]: W1014 15:01:50.213262 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod871569e0_a7c2_4890_b131_c4e9a2e43227.slice/crio-6da3112ce5cd4cb20bfc0e8f8bb4d5e28f70ce5da09c31947fd6edc4fb1eb135 WatchSource:0}: Error finding container 6da3112ce5cd4cb20bfc0e8f8bb4d5e28f70ce5da09c31947fd6edc4fb1eb135: Status 404 returned error can't find the container with id 6da3112ce5cd4cb20bfc0e8f8bb4d5e28f70ce5da09c31947fd6edc4fb1eb135 Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.473673 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dcr2g_ceb09eae-57c9-4a8e-95d5-aa40e49f7316/kube-multus/2.log" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.473982 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dcr2g_ceb09eae-57c9-4a8e-95d5-aa40e49f7316/kube-multus/1.log" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.474009 4860 generic.go:334] "Generic (PLEG): container finished" podID="ceb09eae-57c9-4a8e-95d5-aa40e49f7316" containerID="e4348dbcafb0a136a8778e5f340f7e1294d56b5f49a540dcf5c355211a7a4501" exitCode=2 Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.474067 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dcr2g" event={"ID":"ceb09eae-57c9-4a8e-95d5-aa40e49f7316","Type":"ContainerDied","Data":"e4348dbcafb0a136a8778e5f340f7e1294d56b5f49a540dcf5c355211a7a4501"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.474099 4860 scope.go:117] "RemoveContainer" containerID="4dd2467d8c6acdf7e08b9eab1c254d5a14134e125433a9b40b8eb6dc66cbe4ab" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.474510 4860 scope.go:117] "RemoveContainer" containerID="e4348dbcafb0a136a8778e5f340f7e1294d56b5f49a540dcf5c355211a7a4501" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.477734 4860 generic.go:334] "Generic (PLEG): container finished" podID="871569e0-a7c2-4890-b131-c4e9a2e43227" containerID="e8e3565f90ad40fb3f1e225199504d37eee79b5466d2b2a3edf49ef4ad28c180" exitCode=0 Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.477868 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" event={"ID":"871569e0-a7c2-4890-b131-c4e9a2e43227","Type":"ContainerDied","Data":"e8e3565f90ad40fb3f1e225199504d37eee79b5466d2b2a3edf49ef4ad28c180"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.477906 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" event={"ID":"871569e0-a7c2-4890-b131-c4e9a2e43227","Type":"ContainerStarted","Data":"6da3112ce5cd4cb20bfc0e8f8bb4d5e28f70ce5da09c31947fd6edc4fb1eb135"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.483113 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovnkube-controller/3.log" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.485919 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovn-acl-logging/0.log" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.487078 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mdvx2_87a92ec1-e2b0-407d-990e-ce52a980b64b/ovn-controller/0.log" Oct 14 
15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488284 4860 generic.go:334] "Generic (PLEG): container finished" podID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerID="75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de" exitCode=0 Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488322 4860 generic.go:334] "Generic (PLEG): container finished" podID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerID="2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe" exitCode=0 Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488332 4860 generic.go:334] "Generic (PLEG): container finished" podID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerID="1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573" exitCode=0 Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488340 4860 generic.go:334] "Generic (PLEG): container finished" podID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerID="522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86" exitCode=0 Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488348 4860 generic.go:334] "Generic (PLEG): container finished" podID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerID="8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a" exitCode=0 Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488356 4860 generic.go:334] "Generic (PLEG): container finished" podID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerID="ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1" exitCode=0 Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488363 4860 generic.go:334] "Generic (PLEG): container finished" podID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerID="ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6" exitCode=143 Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488371 4860 generic.go:334] "Generic (PLEG): container finished" podID="87a92ec1-e2b0-407d-990e-ce52a980b64b" containerID="b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c" exitCode=143 Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488394 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerDied","Data":"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488422 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerDied","Data":"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488434 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerDied","Data":"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488445 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerDied","Data":"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488455 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" 
event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerDied","Data":"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488468 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerDied","Data":"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488481 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488493 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488499 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488506 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488513 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488520 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488526 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488533 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488539 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488545 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488554 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerDied","Data":"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488562 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de"} Oct 14 15:01:50 crc 
kubenswrapper[4860]: I1014 15:01:50.488570 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488577 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488596 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488602 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488609 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488615 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488621 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488627 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488633 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488641 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerDied","Data":"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488650 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488658 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488664 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488671 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573"} Oct 14 15:01:50 crc 
kubenswrapper[4860]: I1014 15:01:50.488678 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488683 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488690 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488698 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488704 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488710 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488718 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" event={"ID":"87a92ec1-e2b0-407d-990e-ce52a980b64b","Type":"ContainerDied","Data":"a0359c23fb4b3be298dd011d31a8e240dc19f6b215a2faf49d6ded851aea9021"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488727 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488734 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488740 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488746 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488751 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488758 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488764 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1"} Oct 14 15:01:50 crc 
kubenswrapper[4860]: I1014 15:01:50.488770 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488776 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488783 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a"} Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.488895 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mdvx2" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.495235 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-xjn4j" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.512677 4860 scope.go:117] "RemoveContainer" containerID="75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.562351 4860 scope.go:117] "RemoveContainer" containerID="25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.588216 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mdvx2"] Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.592779 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mdvx2"] Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.620763 4860 scope.go:117] "RemoveContainer" containerID="2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.649943 4860 scope.go:117] "RemoveContainer" containerID="1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.664674 4860 scope.go:117] "RemoveContainer" containerID="522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.691345 4860 scope.go:117] "RemoveContainer" containerID="8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.722430 4860 scope.go:117] "RemoveContainer" containerID="ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.743696 4860 scope.go:117] "RemoveContainer" containerID="ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.756291 4860 scope.go:117] "RemoveContainer" containerID="b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.768120 4860 scope.go:117] "RemoveContainer" containerID="721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.781243 4860 scope.go:117] "RemoveContainer" containerID="75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de" Oct 14 15:01:50 crc kubenswrapper[4860]: E1014 15:01:50.781687 4860 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de\": container with ID starting with 75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de not found: ID does not exist" containerID="75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.781717 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de"} err="failed to get container status \"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de\": rpc error: code = NotFound desc = could not find container \"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de\": container with ID starting with 75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.781738 4860 scope.go:117] "RemoveContainer" containerID="25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d" Oct 14 15:01:50 crc kubenswrapper[4860]: E1014 15:01:50.782018 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d\": container with ID starting with 25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d not found: ID does not exist" containerID="25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.782224 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d"} err="failed to get container status \"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d\": rpc error: code = NotFound desc = could not find container \"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d\": container with ID starting with 25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.782235 4860 scope.go:117] "RemoveContainer" containerID="2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe" Oct 14 15:01:50 crc kubenswrapper[4860]: E1014 15:01:50.782417 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\": container with ID starting with 2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe not found: ID does not exist" containerID="2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.782446 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe"} err="failed to get container status \"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\": rpc error: code = NotFound desc = could not find container \"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\": container with ID starting with 2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.782463 4860 scope.go:117] "RemoveContainer" 
containerID="1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573" Oct 14 15:01:50 crc kubenswrapper[4860]: E1014 15:01:50.782682 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\": container with ID starting with 1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573 not found: ID does not exist" containerID="1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.782698 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573"} err="failed to get container status \"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\": rpc error: code = NotFound desc = could not find container \"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\": container with ID starting with 1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573 not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.782710 4860 scope.go:117] "RemoveContainer" containerID="522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86" Oct 14 15:01:50 crc kubenswrapper[4860]: E1014 15:01:50.783626 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\": container with ID starting with 522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86 not found: ID does not exist" containerID="522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.783649 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86"} err="failed to get container status \"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\": rpc error: code = NotFound desc = could not find container \"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\": container with ID starting with 522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86 not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.783663 4860 scope.go:117] "RemoveContainer" containerID="8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a" Oct 14 15:01:50 crc kubenswrapper[4860]: E1014 15:01:50.783971 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\": container with ID starting with 8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a not found: ID does not exist" containerID="8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.784014 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a"} err="failed to get container status \"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\": rpc error: code = NotFound desc = could not find container \"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\": container with ID starting with 
8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.784062 4860 scope.go:117] "RemoveContainer" containerID="ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1" Oct 14 15:01:50 crc kubenswrapper[4860]: E1014 15:01:50.784332 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\": container with ID starting with ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1 not found: ID does not exist" containerID="ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.784354 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1"} err="failed to get container status \"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\": rpc error: code = NotFound desc = could not find container \"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\": container with ID starting with ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1 not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.784370 4860 scope.go:117] "RemoveContainer" containerID="ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6" Oct 14 15:01:50 crc kubenswrapper[4860]: E1014 15:01:50.784600 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\": container with ID starting with ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6 not found: ID does not exist" containerID="ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.784623 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6"} err="failed to get container status \"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\": rpc error: code = NotFound desc = could not find container \"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\": container with ID starting with ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6 not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.784643 4860 scope.go:117] "RemoveContainer" containerID="b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c" Oct 14 15:01:50 crc kubenswrapper[4860]: E1014 15:01:50.784835 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\": container with ID starting with b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c not found: ID does not exist" containerID="b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.784854 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c"} err="failed to get container status \"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\": rpc 
error: code = NotFound desc = could not find container \"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\": container with ID starting with b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.784869 4860 scope.go:117] "RemoveContainer" containerID="721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a" Oct 14 15:01:50 crc kubenswrapper[4860]: E1014 15:01:50.785069 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\": container with ID starting with 721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a not found: ID does not exist" containerID="721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.785092 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a"} err="failed to get container status \"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\": rpc error: code = NotFound desc = could not find container \"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\": container with ID starting with 721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.785107 4860 scope.go:117] "RemoveContainer" containerID="75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.785326 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de"} err="failed to get container status \"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de\": rpc error: code = NotFound desc = could not find container \"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de\": container with ID starting with 75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.785345 4860 scope.go:117] "RemoveContainer" containerID="25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.785529 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d"} err="failed to get container status \"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d\": rpc error: code = NotFound desc = could not find container \"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d\": container with ID starting with 25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.785553 4860 scope.go:117] "RemoveContainer" containerID="2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.785813 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe"} err="failed to get container status \"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\": rpc 
error: code = NotFound desc = could not find container \"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\": container with ID starting with 2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.785840 4860 scope.go:117] "RemoveContainer" containerID="1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.786181 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573"} err="failed to get container status \"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\": rpc error: code = NotFound desc = could not find container \"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\": container with ID starting with 1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573 not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.788417 4860 scope.go:117] "RemoveContainer" containerID="522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.790145 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86"} err="failed to get container status \"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\": rpc error: code = NotFound desc = could not find container \"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\": container with ID starting with 522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86 not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.790182 4860 scope.go:117] "RemoveContainer" containerID="8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.790569 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a"} err="failed to get container status \"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\": rpc error: code = NotFound desc = could not find container \"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\": container with ID starting with 8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.790602 4860 scope.go:117] "RemoveContainer" containerID="ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.792744 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1"} err="failed to get container status \"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\": rpc error: code = NotFound desc = could not find container \"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\": container with ID starting with ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1 not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.792781 4860 scope.go:117] "RemoveContainer" containerID="ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6" Oct 14 15:01:50 crc 
kubenswrapper[4860]: I1014 15:01:50.794128 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6"} err="failed to get container status \"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\": rpc error: code = NotFound desc = could not find container \"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\": container with ID starting with ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6 not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.794151 4860 scope.go:117] "RemoveContainer" containerID="b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.794398 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c"} err="failed to get container status \"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\": rpc error: code = NotFound desc = could not find container \"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\": container with ID starting with b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.794424 4860 scope.go:117] "RemoveContainer" containerID="721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.795248 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a"} err="failed to get container status \"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\": rpc error: code = NotFound desc = could not find container \"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\": container with ID starting with 721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.795292 4860 scope.go:117] "RemoveContainer" containerID="75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.796709 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de"} err="failed to get container status \"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de\": rpc error: code = NotFound desc = could not find container \"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de\": container with ID starting with 75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.796731 4860 scope.go:117] "RemoveContainer" containerID="25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.796955 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d"} err="failed to get container status \"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d\": rpc error: code = NotFound desc = could not find container \"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d\": container with ID 
starting with 25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.796980 4860 scope.go:117] "RemoveContainer" containerID="2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.797393 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe"} err="failed to get container status \"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\": rpc error: code = NotFound desc = could not find container \"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\": container with ID starting with 2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.797418 4860 scope.go:117] "RemoveContainer" containerID="1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.797646 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573"} err="failed to get container status \"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\": rpc error: code = NotFound desc = could not find container \"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\": container with ID starting with 1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573 not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.797677 4860 scope.go:117] "RemoveContainer" containerID="522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.798065 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86"} err="failed to get container status \"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\": rpc error: code = NotFound desc = could not find container \"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\": container with ID starting with 522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86 not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.798092 4860 scope.go:117] "RemoveContainer" containerID="8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.798307 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a"} err="failed to get container status \"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\": rpc error: code = NotFound desc = could not find container \"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\": container with ID starting with 8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.798334 4860 scope.go:117] "RemoveContainer" containerID="ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.798594 4860 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1"} err="failed to get container status \"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\": rpc error: code = NotFound desc = could not find container \"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\": container with ID starting with ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1 not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.798627 4860 scope.go:117] "RemoveContainer" containerID="ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.798880 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6"} err="failed to get container status \"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\": rpc error: code = NotFound desc = could not find container \"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\": container with ID starting with ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6 not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.798901 4860 scope.go:117] "RemoveContainer" containerID="b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.799144 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c"} err="failed to get container status \"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\": rpc error: code = NotFound desc = could not find container \"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\": container with ID starting with b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.799176 4860 scope.go:117] "RemoveContainer" containerID="721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.799375 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a"} err="failed to get container status \"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\": rpc error: code = NotFound desc = could not find container \"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\": container with ID starting with 721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.799392 4860 scope.go:117] "RemoveContainer" containerID="75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.799596 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de"} err="failed to get container status \"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de\": rpc error: code = NotFound desc = could not find container \"75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de\": container with ID starting with 75123b2b0b8242ae6fcf3875a7fccc30bc7360af2eddda403fafc21148bab2de not found: ID does not exist" Oct 
14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.799616 4860 scope.go:117] "RemoveContainer" containerID="25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.799919 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d"} err="failed to get container status \"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d\": rpc error: code = NotFound desc = could not find container \"25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d\": container with ID starting with 25012fb13c8dd5b7be08dc9839bf3d9d83aec2aa624b40bcf9b53df10eec303d not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.799937 4860 scope.go:117] "RemoveContainer" containerID="2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.800205 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe"} err="failed to get container status \"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\": rpc error: code = NotFound desc = could not find container \"2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe\": container with ID starting with 2c04a045314a9ff5efc5878fb4322afc3cbf6aeaa473a3866eac2bf9e77f47fe not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.800228 4860 scope.go:117] "RemoveContainer" containerID="1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.800496 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573"} err="failed to get container status \"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\": rpc error: code = NotFound desc = could not find container \"1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573\": container with ID starting with 1b5f779b6069e6c6c16717f81487a0522aee19ae43bf274a7376a697061c7573 not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.800519 4860 scope.go:117] "RemoveContainer" containerID="522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.800758 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86"} err="failed to get container status \"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\": rpc error: code = NotFound desc = could not find container \"522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86\": container with ID starting with 522470a9e08ad6d850efdc75d8266835cfb7630cc7be5eabeb0133cfcfff6a86 not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.800781 4860 scope.go:117] "RemoveContainer" containerID="8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.801017 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a"} err="failed to get container status 
\"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\": rpc error: code = NotFound desc = could not find container \"8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a\": container with ID starting with 8fead3e505f12b65f4fc431d5b2c6638c11d918d7e0d1e351ec796db99ea9b7a not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.801052 4860 scope.go:117] "RemoveContainer" containerID="ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.801295 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1"} err="failed to get container status \"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\": rpc error: code = NotFound desc = could not find container \"ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1\": container with ID starting with ddbc6adebae96396ef7615afb13a39e01860743251d9b28fbfb76f0465c1c9c1 not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.801316 4860 scope.go:117] "RemoveContainer" containerID="ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.801517 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6"} err="failed to get container status \"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\": rpc error: code = NotFound desc = could not find container \"ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6\": container with ID starting with ab19d308ae8ae602f1f938688f86618d59082c322e74a39b309a0b96d16045f6 not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.801535 4860 scope.go:117] "RemoveContainer" containerID="b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.801722 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c"} err="failed to get container status \"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\": rpc error: code = NotFound desc = could not find container \"b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c\": container with ID starting with b80635492c1aa00293e8ac58dfff38f8096fffdece24c54f0bcb202f08f9b29c not found: ID does not exist" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.801741 4860 scope.go:117] "RemoveContainer" containerID="721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a" Oct 14 15:01:50 crc kubenswrapper[4860]: I1014 15:01:50.801942 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a"} err="failed to get container status \"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\": rpc error: code = NotFound desc = could not find container \"721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a\": container with ID starting with 721e01d7aa924db0b2f0230a764d3db4f21016572523254b249f4e55a0aac71a not found: ID does not exist" Oct 14 15:01:51 crc kubenswrapper[4860]: I1014 15:01:51.071356 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="87a92ec1-e2b0-407d-990e-ce52a980b64b" path="/var/lib/kubelet/pods/87a92ec1-e2b0-407d-990e-ce52a980b64b/volumes" Oct 14 15:01:51 crc kubenswrapper[4860]: I1014 15:01:51.494815 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dcr2g_ceb09eae-57c9-4a8e-95d5-aa40e49f7316/kube-multus/2.log" Oct 14 15:01:51 crc kubenswrapper[4860]: I1014 15:01:51.494893 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dcr2g" event={"ID":"ceb09eae-57c9-4a8e-95d5-aa40e49f7316","Type":"ContainerStarted","Data":"9553df9c9c8508b9e605c85b29addd5dd2b4de6eafcfce1b61b4e03062a22319"} Oct 14 15:01:51 crc kubenswrapper[4860]: I1014 15:01:51.498225 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" event={"ID":"871569e0-a7c2-4890-b131-c4e9a2e43227","Type":"ContainerStarted","Data":"4952158cccad6681be05784d773575077471bc464cb743fd46be4326b1750e36"} Oct 14 15:01:51 crc kubenswrapper[4860]: I1014 15:01:51.498250 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" event={"ID":"871569e0-a7c2-4890-b131-c4e9a2e43227","Type":"ContainerStarted","Data":"fd03bdbf8d10fb2af7fe87a85f8b743532d9e31196ffc2a3c45bedde1cb7f71f"} Oct 14 15:01:51 crc kubenswrapper[4860]: I1014 15:01:51.498258 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" event={"ID":"871569e0-a7c2-4890-b131-c4e9a2e43227","Type":"ContainerStarted","Data":"2fa9a561011b6128bf0914c358492f6ed5fb00ca90ca880f86f1b091b0618337"} Oct 14 15:01:51 crc kubenswrapper[4860]: I1014 15:01:51.498285 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" event={"ID":"871569e0-a7c2-4890-b131-c4e9a2e43227","Type":"ContainerStarted","Data":"507c5601f813b7b641e024e722df39118285c707377d80bc75b8b15c230b3e50"} Oct 14 15:01:51 crc kubenswrapper[4860]: I1014 15:01:51.498293 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" event={"ID":"871569e0-a7c2-4890-b131-c4e9a2e43227","Type":"ContainerStarted","Data":"74d22ce5cb606dfadad3313408db0c6ad69ffcc49441c87fb7173e974ff4c87f"} Oct 14 15:01:51 crc kubenswrapper[4860]: I1014 15:01:51.498301 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" event={"ID":"871569e0-a7c2-4890-b131-c4e9a2e43227","Type":"ContainerStarted","Data":"a49a576d8c4299ec83da5e00205194cfa091d1c645c689c9c1fefc32c3c16d47"} Oct 14 15:01:53 crc kubenswrapper[4860]: I1014 15:01:53.513586 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" event={"ID":"871569e0-a7c2-4890-b131-c4e9a2e43227","Type":"ContainerStarted","Data":"b37c910c8986805c4bfbe4b1063187c567e1254957462ac3bfe229328dcf87bd"} Oct 14 15:01:56 crc kubenswrapper[4860]: I1014 15:01:56.529464 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" event={"ID":"871569e0-a7c2-4890-b131-c4e9a2e43227","Type":"ContainerStarted","Data":"d3fde0ca9edde6d4caa6e85cc694b13aed605ec536de56ec524a1877877b4dbb"} Oct 14 15:01:56 crc kubenswrapper[4860]: I1014 15:01:56.529946 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:56 crc kubenswrapper[4860]: I1014 15:01:56.529975 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:56 crc kubenswrapper[4860]: I1014 15:01:56.556119 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" podStartSLOduration=7.556102828 podStartE2EDuration="7.556102828s" podCreationTimestamp="2025-10-14 15:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:01:56.553661038 +0000 UTC m=+778.140444487" watchObservedRunningTime="2025-10-14 15:01:56.556102828 +0000 UTC m=+778.142886267" Oct 14 15:01:56 crc kubenswrapper[4860]: I1014 15:01:56.558543 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:57 crc kubenswrapper[4860]: I1014 15:01:57.534208 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:01:57 crc kubenswrapper[4860]: I1014 15:01:57.562542 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:02:20 crc kubenswrapper[4860]: I1014 15:02:20.218497 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qdm4s" Oct 14 15:02:31 crc kubenswrapper[4860]: I1014 15:02:31.834584 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg"] Oct 14 15:02:31 crc kubenswrapper[4860]: I1014 15:02:31.836071 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" Oct 14 15:02:31 crc kubenswrapper[4860]: I1014 15:02:31.837928 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 14 15:02:31 crc kubenswrapper[4860]: I1014 15:02:31.844565 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg"] Oct 14 15:02:31 crc kubenswrapper[4860]: I1014 15:02:31.934540 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4c172442-19ed-484a-8404-5a5373f066e1-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg\" (UID: \"4c172442-19ed-484a-8404-5a5373f066e1\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" Oct 14 15:02:31 crc kubenswrapper[4860]: I1014 15:02:31.934588 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmkq8\" (UniqueName: \"kubernetes.io/projected/4c172442-19ed-484a-8404-5a5373f066e1-kube-api-access-zmkq8\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg\" (UID: \"4c172442-19ed-484a-8404-5a5373f066e1\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" Oct 14 15:02:31 crc kubenswrapper[4860]: I1014 15:02:31.934671 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4c172442-19ed-484a-8404-5a5373f066e1-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg\" (UID: \"4c172442-19ed-484a-8404-5a5373f066e1\") " 
pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" Oct 14 15:02:32 crc kubenswrapper[4860]: I1014 15:02:32.035887 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4c172442-19ed-484a-8404-5a5373f066e1-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg\" (UID: \"4c172442-19ed-484a-8404-5a5373f066e1\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" Oct 14 15:02:32 crc kubenswrapper[4860]: I1014 15:02:32.035988 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4c172442-19ed-484a-8404-5a5373f066e1-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg\" (UID: \"4c172442-19ed-484a-8404-5a5373f066e1\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" Oct 14 15:02:32 crc kubenswrapper[4860]: I1014 15:02:32.036016 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmkq8\" (UniqueName: \"kubernetes.io/projected/4c172442-19ed-484a-8404-5a5373f066e1-kube-api-access-zmkq8\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg\" (UID: \"4c172442-19ed-484a-8404-5a5373f066e1\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" Oct 14 15:02:32 crc kubenswrapper[4860]: I1014 15:02:32.036387 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4c172442-19ed-484a-8404-5a5373f066e1-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg\" (UID: \"4c172442-19ed-484a-8404-5a5373f066e1\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" Oct 14 15:02:32 crc kubenswrapper[4860]: I1014 15:02:32.036470 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4c172442-19ed-484a-8404-5a5373f066e1-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg\" (UID: \"4c172442-19ed-484a-8404-5a5373f066e1\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" Oct 14 15:02:32 crc kubenswrapper[4860]: I1014 15:02:32.054643 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmkq8\" (UniqueName: \"kubernetes.io/projected/4c172442-19ed-484a-8404-5a5373f066e1-kube-api-access-zmkq8\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg\" (UID: \"4c172442-19ed-484a-8404-5a5373f066e1\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" Oct 14 15:02:32 crc kubenswrapper[4860]: I1014 15:02:32.254805 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" Oct 14 15:02:32 crc kubenswrapper[4860]: I1014 15:02:32.639509 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg"] Oct 14 15:02:32 crc kubenswrapper[4860]: I1014 15:02:32.750374 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" event={"ID":"4c172442-19ed-484a-8404-5a5373f066e1","Type":"ContainerStarted","Data":"ae949d8cff716ac18418b007966458babdd18d4662c19cfb195165ddeac4ed2b"} Oct 14 15:02:33 crc kubenswrapper[4860]: I1014 15:02:33.764330 4860 generic.go:334] "Generic (PLEG): container finished" podID="4c172442-19ed-484a-8404-5a5373f066e1" containerID="6ca5f22c4ea6bace63d4b3891a4f504cf798030b7625310c60841630001b596d" exitCode=0 Oct 14 15:02:33 crc kubenswrapper[4860]: I1014 15:02:33.764388 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" event={"ID":"4c172442-19ed-484a-8404-5a5373f066e1","Type":"ContainerDied","Data":"6ca5f22c4ea6bace63d4b3891a4f504cf798030b7625310c60841630001b596d"} Oct 14 15:02:34 crc kubenswrapper[4860]: I1014 15:02:34.203509 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t6cgm"] Oct 14 15:02:34 crc kubenswrapper[4860]: I1014 15:02:34.204615 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:34 crc kubenswrapper[4860]: I1014 15:02:34.205642 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t6cgm"] Oct 14 15:02:34 crc kubenswrapper[4860]: I1014 15:02:34.372189 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e8905c7-b5ff-433b-ac38-20b37eab0f27-utilities\") pod \"redhat-operators-t6cgm\" (UID: \"1e8905c7-b5ff-433b-ac38-20b37eab0f27\") " pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:34 crc kubenswrapper[4860]: I1014 15:02:34.372286 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57jhm\" (UniqueName: \"kubernetes.io/projected/1e8905c7-b5ff-433b-ac38-20b37eab0f27-kube-api-access-57jhm\") pod \"redhat-operators-t6cgm\" (UID: \"1e8905c7-b5ff-433b-ac38-20b37eab0f27\") " pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:34 crc kubenswrapper[4860]: I1014 15:02:34.372369 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e8905c7-b5ff-433b-ac38-20b37eab0f27-catalog-content\") pod \"redhat-operators-t6cgm\" (UID: \"1e8905c7-b5ff-433b-ac38-20b37eab0f27\") " pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:34 crc kubenswrapper[4860]: I1014 15:02:34.473659 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57jhm\" (UniqueName: \"kubernetes.io/projected/1e8905c7-b5ff-433b-ac38-20b37eab0f27-kube-api-access-57jhm\") pod \"redhat-operators-t6cgm\" (UID: \"1e8905c7-b5ff-433b-ac38-20b37eab0f27\") " pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:34 crc kubenswrapper[4860]: I1014 15:02:34.474063 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e8905c7-b5ff-433b-ac38-20b37eab0f27-catalog-content\") pod \"redhat-operators-t6cgm\" (UID: \"1e8905c7-b5ff-433b-ac38-20b37eab0f27\") " pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:34 crc kubenswrapper[4860]: I1014 15:02:34.474134 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e8905c7-b5ff-433b-ac38-20b37eab0f27-utilities\") pod \"redhat-operators-t6cgm\" (UID: \"1e8905c7-b5ff-433b-ac38-20b37eab0f27\") " pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:34 crc kubenswrapper[4860]: I1014 15:02:34.474773 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e8905c7-b5ff-433b-ac38-20b37eab0f27-catalog-content\") pod \"redhat-operators-t6cgm\" (UID: \"1e8905c7-b5ff-433b-ac38-20b37eab0f27\") " pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:34 crc kubenswrapper[4860]: I1014 15:02:34.474834 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e8905c7-b5ff-433b-ac38-20b37eab0f27-utilities\") pod \"redhat-operators-t6cgm\" (UID: \"1e8905c7-b5ff-433b-ac38-20b37eab0f27\") " pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:34 crc kubenswrapper[4860]: I1014 15:02:34.493212 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57jhm\" (UniqueName: \"kubernetes.io/projected/1e8905c7-b5ff-433b-ac38-20b37eab0f27-kube-api-access-57jhm\") pod \"redhat-operators-t6cgm\" (UID: \"1e8905c7-b5ff-433b-ac38-20b37eab0f27\") " pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:34 crc kubenswrapper[4860]: I1014 15:02:34.562389 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:34 crc kubenswrapper[4860]: I1014 15:02:34.977489 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t6cgm"] Oct 14 15:02:34 crc kubenswrapper[4860]: W1014 15:02:34.984865 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e8905c7_b5ff_433b_ac38_20b37eab0f27.slice/crio-cb3e0b2f17c16cebad445036b4813949a136cf0dcdcca1d1e85ec1c793dbc902 WatchSource:0}: Error finding container cb3e0b2f17c16cebad445036b4813949a136cf0dcdcca1d1e85ec1c793dbc902: Status 404 returned error can't find the container with id cb3e0b2f17c16cebad445036b4813949a136cf0dcdcca1d1e85ec1c793dbc902 Oct 14 15:02:35 crc kubenswrapper[4860]: I1014 15:02:35.776591 4860 generic.go:334] "Generic (PLEG): container finished" podID="1e8905c7-b5ff-433b-ac38-20b37eab0f27" containerID="c65bc713eccaedb361ca52dcea679e586d0eee390f73593a664b7e324a9d1bc8" exitCode=0 Oct 14 15:02:35 crc kubenswrapper[4860]: I1014 15:02:35.776639 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6cgm" event={"ID":"1e8905c7-b5ff-433b-ac38-20b37eab0f27","Type":"ContainerDied","Data":"c65bc713eccaedb361ca52dcea679e586d0eee390f73593a664b7e324a9d1bc8"} Oct 14 15:02:35 crc kubenswrapper[4860]: I1014 15:02:35.776667 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6cgm" event={"ID":"1e8905c7-b5ff-433b-ac38-20b37eab0f27","Type":"ContainerStarted","Data":"cb3e0b2f17c16cebad445036b4813949a136cf0dcdcca1d1e85ec1c793dbc902"} Oct 14 15:02:36 crc kubenswrapper[4860]: I1014 15:02:36.785996 4860 generic.go:334] "Generic (PLEG): container finished" podID="4c172442-19ed-484a-8404-5a5373f066e1" containerID="f0fd6799290b8ead5e8292df62bf9139f3e478f73635fb0282e9b4cce012b740" exitCode=0 Oct 14 15:02:36 crc kubenswrapper[4860]: I1014 15:02:36.786061 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" event={"ID":"4c172442-19ed-484a-8404-5a5373f066e1","Type":"ContainerDied","Data":"f0fd6799290b8ead5e8292df62bf9139f3e478f73635fb0282e9b4cce012b740"} Oct 14 15:02:37 crc kubenswrapper[4860]: I1014 15:02:37.792464 4860 generic.go:334] "Generic (PLEG): container finished" podID="1e8905c7-b5ff-433b-ac38-20b37eab0f27" containerID="4bbd73f33c652a77fee6c06e753188e8a357462e55396f82e0f859d786148374" exitCode=0 Oct 14 15:02:37 crc kubenswrapper[4860]: I1014 15:02:37.792560 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6cgm" event={"ID":"1e8905c7-b5ff-433b-ac38-20b37eab0f27","Type":"ContainerDied","Data":"4bbd73f33c652a77fee6c06e753188e8a357462e55396f82e0f859d786148374"} Oct 14 15:02:37 crc kubenswrapper[4860]: I1014 15:02:37.794421 4860 generic.go:334] "Generic (PLEG): container finished" podID="4c172442-19ed-484a-8404-5a5373f066e1" containerID="2746fdc88a2a7c775a5418176ff6d040673f34b54290707490a248849f36cb26" exitCode=0 Oct 14 15:02:37 crc kubenswrapper[4860]: I1014 15:02:37.794453 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" event={"ID":"4c172442-19ed-484a-8404-5a5373f066e1","Type":"ContainerDied","Data":"2746fdc88a2a7c775a5418176ff6d040673f34b54290707490a248849f36cb26"} Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.003819 4860 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hkwxz"] Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.005559 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.012739 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hkwxz"] Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.109446 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.146609 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c65pj\" (UniqueName: \"kubernetes.io/projected/96856478-85d6-461e-9188-f8bff53f9b03-kube-api-access-c65pj\") pod \"community-operators-hkwxz\" (UID: \"96856478-85d6-461e-9188-f8bff53f9b03\") " pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.146658 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96856478-85d6-461e-9188-f8bff53f9b03-catalog-content\") pod \"community-operators-hkwxz\" (UID: \"96856478-85d6-461e-9188-f8bff53f9b03\") " pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.146702 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96856478-85d6-461e-9188-f8bff53f9b03-utilities\") pod \"community-operators-hkwxz\" (UID: \"96856478-85d6-461e-9188-f8bff53f9b03\") " pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.247708 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4c172442-19ed-484a-8404-5a5373f066e1-bundle\") pod \"4c172442-19ed-484a-8404-5a5373f066e1\" (UID: \"4c172442-19ed-484a-8404-5a5373f066e1\") " Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.247840 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4c172442-19ed-484a-8404-5a5373f066e1-util\") pod \"4c172442-19ed-484a-8404-5a5373f066e1\" (UID: \"4c172442-19ed-484a-8404-5a5373f066e1\") " Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.247944 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmkq8\" (UniqueName: \"kubernetes.io/projected/4c172442-19ed-484a-8404-5a5373f066e1-kube-api-access-zmkq8\") pod \"4c172442-19ed-484a-8404-5a5373f066e1\" (UID: \"4c172442-19ed-484a-8404-5a5373f066e1\") " Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.248200 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c65pj\" (UniqueName: \"kubernetes.io/projected/96856478-85d6-461e-9188-f8bff53f9b03-kube-api-access-c65pj\") pod \"community-operators-hkwxz\" (UID: \"96856478-85d6-461e-9188-f8bff53f9b03\") " pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.248239 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96856478-85d6-461e-9188-f8bff53f9b03-catalog-content\") pod \"community-operators-hkwxz\" (UID: \"96856478-85d6-461e-9188-f8bff53f9b03\") " pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.248275 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c172442-19ed-484a-8404-5a5373f066e1-bundle" (OuterVolumeSpecName: "bundle") pod "4c172442-19ed-484a-8404-5a5373f066e1" (UID: "4c172442-19ed-484a-8404-5a5373f066e1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.248326 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96856478-85d6-461e-9188-f8bff53f9b03-utilities\") pod \"community-operators-hkwxz\" (UID: \"96856478-85d6-461e-9188-f8bff53f9b03\") " pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.248390 4860 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4c172442-19ed-484a-8404-5a5373f066e1-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.248793 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96856478-85d6-461e-9188-f8bff53f9b03-catalog-content\") pod \"community-operators-hkwxz\" (UID: \"96856478-85d6-461e-9188-f8bff53f9b03\") " pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.249260 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96856478-85d6-461e-9188-f8bff53f9b03-utilities\") pod \"community-operators-hkwxz\" (UID: \"96856478-85d6-461e-9188-f8bff53f9b03\") " pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.255206 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c172442-19ed-484a-8404-5a5373f066e1-kube-api-access-zmkq8" (OuterVolumeSpecName: "kube-api-access-zmkq8") pod "4c172442-19ed-484a-8404-5a5373f066e1" (UID: "4c172442-19ed-484a-8404-5a5373f066e1"). InnerVolumeSpecName "kube-api-access-zmkq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.259824 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c172442-19ed-484a-8404-5a5373f066e1-util" (OuterVolumeSpecName: "util") pod "4c172442-19ed-484a-8404-5a5373f066e1" (UID: "4c172442-19ed-484a-8404-5a5373f066e1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.268292 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c65pj\" (UniqueName: \"kubernetes.io/projected/96856478-85d6-461e-9188-f8bff53f9b03-kube-api-access-c65pj\") pod \"community-operators-hkwxz\" (UID: \"96856478-85d6-461e-9188-f8bff53f9b03\") " pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.349391 4860 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4c172442-19ed-484a-8404-5a5373f066e1-util\") on node \"crc\" DevicePath \"\"" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.349426 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmkq8\" (UniqueName: \"kubernetes.io/projected/4c172442-19ed-484a-8404-5a5373f066e1-kube-api-access-zmkq8\") on node \"crc\" DevicePath \"\"" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.406386 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.809376 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" event={"ID":"4c172442-19ed-484a-8404-5a5373f066e1","Type":"ContainerDied","Data":"ae949d8cff716ac18418b007966458babdd18d4662c19cfb195165ddeac4ed2b"} Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.809704 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae949d8cff716ac18418b007966458babdd18d4662c19cfb195165ddeac4ed2b" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.809458 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.811840 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6cgm" event={"ID":"1e8905c7-b5ff-433b-ac38-20b37eab0f27","Type":"ContainerStarted","Data":"7d5fda7130fa4c0aff20c8a89b89a5e62c9c1d103260cccafc163484ff7dbdcd"} Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.833041 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t6cgm" podStartSLOduration=2.611545235 podStartE2EDuration="5.833005004s" podCreationTimestamp="2025-10-14 15:02:34 +0000 UTC" firstStartedPulling="2025-10-14 15:02:35.778692753 +0000 UTC m=+817.365476202" lastFinishedPulling="2025-10-14 15:02:39.000152502 +0000 UTC m=+820.586935971" observedRunningTime="2025-10-14 15:02:39.830626587 +0000 UTC m=+821.417410036" watchObservedRunningTime="2025-10-14 15:02:39.833005004 +0000 UTC m=+821.419788453" Oct 14 15:02:39 crc kubenswrapper[4860]: I1014 15:02:39.865368 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hkwxz"] Oct 14 15:02:39 crc kubenswrapper[4860]: W1014 15:02:39.870516 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96856478_85d6_461e_9188_f8bff53f9b03.slice/crio-d2ecdfb0f59fac4547b37bbfc53614d35eb455ba04263c6d6ce9939f0f817c0e WatchSource:0}: Error finding container d2ecdfb0f59fac4547b37bbfc53614d35eb455ba04263c6d6ce9939f0f817c0e: Status 404 returned error can't find the container with id d2ecdfb0f59fac4547b37bbfc53614d35eb455ba04263c6d6ce9939f0f817c0e Oct 14 15:02:40 crc kubenswrapper[4860]: I1014 15:02:40.825440 4860 generic.go:334] "Generic (PLEG): container finished" podID="96856478-85d6-461e-9188-f8bff53f9b03" containerID="9710f9aaf6912547340ea353f21cfdddc33776ede1ef02a5f4c0390e3383bc29" exitCode=0 Oct 14 15:02:40 crc kubenswrapper[4860]: I1014 15:02:40.825499 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwxz" event={"ID":"96856478-85d6-461e-9188-f8bff53f9b03","Type":"ContainerDied","Data":"9710f9aaf6912547340ea353f21cfdddc33776ede1ef02a5f4c0390e3383bc29"} Oct 14 15:02:40 crc kubenswrapper[4860]: I1014 15:02:40.825543 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwxz" event={"ID":"96856478-85d6-461e-9188-f8bff53f9b03","Type":"ContainerStarted","Data":"d2ecdfb0f59fac4547b37bbfc53614d35eb455ba04263c6d6ce9939f0f817c0e"} Oct 14 15:02:41 crc kubenswrapper[4860]: I1014 15:02:41.831422 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwxz" event={"ID":"96856478-85d6-461e-9188-f8bff53f9b03","Type":"ContainerStarted","Data":"c0749dbbb76284b533f75e79d3f3d3006b7a8d46275dff88faf64c36c7a45d85"} Oct 14 15:02:42 crc kubenswrapper[4860]: I1014 15:02:42.427787 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-zn5lz"] Oct 14 15:02:42 crc kubenswrapper[4860]: E1014 15:02:42.428004 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c172442-19ed-484a-8404-5a5373f066e1" containerName="util" Oct 14 15:02:42 crc kubenswrapper[4860]: I1014 15:02:42.428017 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c172442-19ed-484a-8404-5a5373f066e1" containerName="util" 
Oct 14 15:02:42 crc kubenswrapper[4860]: E1014 15:02:42.428053 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c172442-19ed-484a-8404-5a5373f066e1" containerName="extract" Oct 14 15:02:42 crc kubenswrapper[4860]: I1014 15:02:42.428060 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c172442-19ed-484a-8404-5a5373f066e1" containerName="extract" Oct 14 15:02:42 crc kubenswrapper[4860]: E1014 15:02:42.428077 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c172442-19ed-484a-8404-5a5373f066e1" containerName="pull" Oct 14 15:02:42 crc kubenswrapper[4860]: I1014 15:02:42.428084 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c172442-19ed-484a-8404-5a5373f066e1" containerName="pull" Oct 14 15:02:42 crc kubenswrapper[4860]: I1014 15:02:42.428180 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c172442-19ed-484a-8404-5a5373f066e1" containerName="extract" Oct 14 15:02:42 crc kubenswrapper[4860]: I1014 15:02:42.428528 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-zn5lz" Oct 14 15:02:42 crc kubenswrapper[4860]: I1014 15:02:42.431003 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-lb2fr" Oct 14 15:02:42 crc kubenswrapper[4860]: I1014 15:02:42.432293 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 14 15:02:42 crc kubenswrapper[4860]: I1014 15:02:42.434368 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 14 15:02:42 crc kubenswrapper[4860]: I1014 15:02:42.440489 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-zn5lz"] Oct 14 15:02:42 crc kubenswrapper[4860]: I1014 15:02:42.500146 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm5gn\" (UniqueName: \"kubernetes.io/projected/b7e911b9-3fd1-49b4-8716-70507a0b2aa4-kube-api-access-fm5gn\") pod \"nmstate-operator-858ddd8f98-zn5lz\" (UID: \"b7e911b9-3fd1-49b4-8716-70507a0b2aa4\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-zn5lz" Oct 14 15:02:42 crc kubenswrapper[4860]: I1014 15:02:42.601431 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm5gn\" (UniqueName: \"kubernetes.io/projected/b7e911b9-3fd1-49b4-8716-70507a0b2aa4-kube-api-access-fm5gn\") pod \"nmstate-operator-858ddd8f98-zn5lz\" (UID: \"b7e911b9-3fd1-49b4-8716-70507a0b2aa4\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-zn5lz" Oct 14 15:02:42 crc kubenswrapper[4860]: I1014 15:02:42.621537 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm5gn\" (UniqueName: \"kubernetes.io/projected/b7e911b9-3fd1-49b4-8716-70507a0b2aa4-kube-api-access-fm5gn\") pod \"nmstate-operator-858ddd8f98-zn5lz\" (UID: \"b7e911b9-3fd1-49b4-8716-70507a0b2aa4\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-zn5lz" Oct 14 15:02:42 crc kubenswrapper[4860]: I1014 15:02:42.778534 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-zn5lz" Oct 14 15:02:42 crc kubenswrapper[4860]: I1014 15:02:42.837478 4860 generic.go:334] "Generic (PLEG): container finished" podID="96856478-85d6-461e-9188-f8bff53f9b03" containerID="c0749dbbb76284b533f75e79d3f3d3006b7a8d46275dff88faf64c36c7a45d85" exitCode=0 Oct 14 15:02:42 crc kubenswrapper[4860]: I1014 15:02:42.837639 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwxz" event={"ID":"96856478-85d6-461e-9188-f8bff53f9b03","Type":"ContainerDied","Data":"c0749dbbb76284b533f75e79d3f3d3006b7a8d46275dff88faf64c36c7a45d85"} Oct 14 15:02:43 crc kubenswrapper[4860]: I1014 15:02:43.195848 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-zn5lz"] Oct 14 15:02:43 crc kubenswrapper[4860]: I1014 15:02:43.849723 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-zn5lz" event={"ID":"b7e911b9-3fd1-49b4-8716-70507a0b2aa4","Type":"ContainerStarted","Data":"a7d05eb11d5c0e112759ee6438319f3f9666495afab1e912e6fdf8760544f45b"} Oct 14 15:02:44 crc kubenswrapper[4860]: I1014 15:02:44.563517 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:44 crc kubenswrapper[4860]: I1014 15:02:44.563596 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:44 crc kubenswrapper[4860]: I1014 15:02:44.603709 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:44 crc kubenswrapper[4860]: I1014 15:02:44.917405 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:46 crc kubenswrapper[4860]: I1014 15:02:46.384366 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t6cgm"] Oct 14 15:02:46 crc kubenswrapper[4860]: I1014 15:02:46.866959 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t6cgm" podUID="1e8905c7-b5ff-433b-ac38-20b37eab0f27" containerName="registry-server" containerID="cri-o://7d5fda7130fa4c0aff20c8a89b89a5e62c9c1d103260cccafc163484ff7dbdcd" gracePeriod=2 Oct 14 15:02:47 crc kubenswrapper[4860]: I1014 15:02:47.873310 4860 generic.go:334] "Generic (PLEG): container finished" podID="1e8905c7-b5ff-433b-ac38-20b37eab0f27" containerID="7d5fda7130fa4c0aff20c8a89b89a5e62c9c1d103260cccafc163484ff7dbdcd" exitCode=0 Oct 14 15:02:47 crc kubenswrapper[4860]: I1014 15:02:47.873397 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6cgm" event={"ID":"1e8905c7-b5ff-433b-ac38-20b37eab0f27","Type":"ContainerDied","Data":"7d5fda7130fa4c0aff20c8a89b89a5e62c9c1d103260cccafc163484ff7dbdcd"} Oct 14 15:02:49 crc kubenswrapper[4860]: I1014 15:02:49.885805 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwxz" event={"ID":"96856478-85d6-461e-9188-f8bff53f9b03","Type":"ContainerStarted","Data":"86b01d30b8ce04f2034cd7e3eeceab71fbe11238ea2dab681b9455737ec02387"} Oct 14 15:02:50 crc kubenswrapper[4860]: I1014 15:02:50.291262 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:50 crc kubenswrapper[4860]: I1014 15:02:50.401430 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57jhm\" (UniqueName: \"kubernetes.io/projected/1e8905c7-b5ff-433b-ac38-20b37eab0f27-kube-api-access-57jhm\") pod \"1e8905c7-b5ff-433b-ac38-20b37eab0f27\" (UID: \"1e8905c7-b5ff-433b-ac38-20b37eab0f27\") " Oct 14 15:02:50 crc kubenswrapper[4860]: I1014 15:02:50.401521 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e8905c7-b5ff-433b-ac38-20b37eab0f27-catalog-content\") pod \"1e8905c7-b5ff-433b-ac38-20b37eab0f27\" (UID: \"1e8905c7-b5ff-433b-ac38-20b37eab0f27\") " Oct 14 15:02:50 crc kubenswrapper[4860]: I1014 15:02:50.401686 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e8905c7-b5ff-433b-ac38-20b37eab0f27-utilities\") pod \"1e8905c7-b5ff-433b-ac38-20b37eab0f27\" (UID: \"1e8905c7-b5ff-433b-ac38-20b37eab0f27\") " Oct 14 15:02:50 crc kubenswrapper[4860]: I1014 15:02:50.402417 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e8905c7-b5ff-433b-ac38-20b37eab0f27-utilities" (OuterVolumeSpecName: "utilities") pod "1e8905c7-b5ff-433b-ac38-20b37eab0f27" (UID: "1e8905c7-b5ff-433b-ac38-20b37eab0f27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:02:50 crc kubenswrapper[4860]: I1014 15:02:50.414171 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e8905c7-b5ff-433b-ac38-20b37eab0f27-kube-api-access-57jhm" (OuterVolumeSpecName: "kube-api-access-57jhm") pod "1e8905c7-b5ff-433b-ac38-20b37eab0f27" (UID: "1e8905c7-b5ff-433b-ac38-20b37eab0f27"). InnerVolumeSpecName "kube-api-access-57jhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:02:50 crc kubenswrapper[4860]: I1014 15:02:50.483977 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e8905c7-b5ff-433b-ac38-20b37eab0f27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e8905c7-b5ff-433b-ac38-20b37eab0f27" (UID: "1e8905c7-b5ff-433b-ac38-20b37eab0f27"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:02:50 crc kubenswrapper[4860]: I1014 15:02:50.503592 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e8905c7-b5ff-433b-ac38-20b37eab0f27-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:02:50 crc kubenswrapper[4860]: I1014 15:02:50.503793 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57jhm\" (UniqueName: \"kubernetes.io/projected/1e8905c7-b5ff-433b-ac38-20b37eab0f27-kube-api-access-57jhm\") on node \"crc\" DevicePath \"\"" Oct 14 15:02:50 crc kubenswrapper[4860]: I1014 15:02:50.503849 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e8905c7-b5ff-433b-ac38-20b37eab0f27-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:02:50 crc kubenswrapper[4860]: I1014 15:02:50.892753 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6cgm" event={"ID":"1e8905c7-b5ff-433b-ac38-20b37eab0f27","Type":"ContainerDied","Data":"cb3e0b2f17c16cebad445036b4813949a136cf0dcdcca1d1e85ec1c793dbc902"} Oct 14 15:02:50 crc kubenswrapper[4860]: I1014 15:02:50.892819 4860 scope.go:117] "RemoveContainer" containerID="7d5fda7130fa4c0aff20c8a89b89a5e62c9c1d103260cccafc163484ff7dbdcd" Oct 14 15:02:50 crc kubenswrapper[4860]: I1014 15:02:50.893887 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6cgm" Oct 14 15:02:50 crc kubenswrapper[4860]: I1014 15:02:50.930155 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t6cgm"] Oct 14 15:02:50 crc kubenswrapper[4860]: I1014 15:02:50.936093 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t6cgm"] Oct 14 15:02:51 crc kubenswrapper[4860]: I1014 15:02:51.002769 4860 scope.go:117] "RemoveContainer" containerID="4bbd73f33c652a77fee6c06e753188e8a357462e55396f82e0f859d786148374" Oct 14 15:02:51 crc kubenswrapper[4860]: I1014 15:02:51.025497 4860 scope.go:117] "RemoveContainer" containerID="c65bc713eccaedb361ca52dcea679e586d0eee390f73593a664b7e324a9d1bc8" Oct 14 15:02:51 crc kubenswrapper[4860]: I1014 15:02:51.067655 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e8905c7-b5ff-433b-ac38-20b37eab0f27" path="/var/lib/kubelet/pods/1e8905c7-b5ff-433b-ac38-20b37eab0f27/volumes" Oct 14 15:02:51 crc kubenswrapper[4860]: I1014 15:02:51.930541 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hkwxz" podStartSLOduration=7.675302072 podStartE2EDuration="13.930511565s" podCreationTimestamp="2025-10-14 15:02:38 +0000 UTC" firstStartedPulling="2025-10-14 15:02:40.827795953 +0000 UTC m=+822.414579402" lastFinishedPulling="2025-10-14 15:02:47.083005446 +0000 UTC m=+828.669788895" observedRunningTime="2025-10-14 15:02:51.924764245 +0000 UTC m=+833.511547694" watchObservedRunningTime="2025-10-14 15:02:51.930511565 +0000 UTC m=+833.517295054" Oct 14 15:02:51 crc kubenswrapper[4860]: I1014 15:02:51.997110 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k9jd8"] Oct 14 15:02:51 crc kubenswrapper[4860]: E1014 15:02:51.997302 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e8905c7-b5ff-433b-ac38-20b37eab0f27" containerName="extract-utilities" Oct 14 15:02:51 crc 
kubenswrapper[4860]: I1014 15:02:51.997314 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e8905c7-b5ff-433b-ac38-20b37eab0f27" containerName="extract-utilities" Oct 14 15:02:51 crc kubenswrapper[4860]: E1014 15:02:51.997324 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e8905c7-b5ff-433b-ac38-20b37eab0f27" containerName="registry-server" Oct 14 15:02:51 crc kubenswrapper[4860]: I1014 15:02:51.997330 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e8905c7-b5ff-433b-ac38-20b37eab0f27" containerName="registry-server" Oct 14 15:02:51 crc kubenswrapper[4860]: E1014 15:02:51.997340 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e8905c7-b5ff-433b-ac38-20b37eab0f27" containerName="extract-content" Oct 14 15:02:51 crc kubenswrapper[4860]: I1014 15:02:51.997346 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e8905c7-b5ff-433b-ac38-20b37eab0f27" containerName="extract-content" Oct 14 15:02:52 crc kubenswrapper[4860]: I1014 15:02:52.002113 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e8905c7-b5ff-433b-ac38-20b37eab0f27" containerName="registry-server" Oct 14 15:02:52 crc kubenswrapper[4860]: I1014 15:02:52.002957 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:02:52 crc kubenswrapper[4860]: I1014 15:02:52.011449 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9jd8"] Oct 14 15:02:52 crc kubenswrapper[4860]: I1014 15:02:52.128004 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fdc501a-d834-461d-bef2-5c8e49751f2d-utilities\") pod \"redhat-marketplace-k9jd8\" (UID: \"2fdc501a-d834-461d-bef2-5c8e49751f2d\") " pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:02:52 crc kubenswrapper[4860]: I1014 15:02:52.128411 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fdc501a-d834-461d-bef2-5c8e49751f2d-catalog-content\") pod \"redhat-marketplace-k9jd8\" (UID: \"2fdc501a-d834-461d-bef2-5c8e49751f2d\") " pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:02:52 crc kubenswrapper[4860]: I1014 15:02:52.128500 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgc2k\" (UniqueName: \"kubernetes.io/projected/2fdc501a-d834-461d-bef2-5c8e49751f2d-kube-api-access-jgc2k\") pod \"redhat-marketplace-k9jd8\" (UID: \"2fdc501a-d834-461d-bef2-5c8e49751f2d\") " pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:02:52 crc kubenswrapper[4860]: I1014 15:02:52.229928 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgc2k\" (UniqueName: \"kubernetes.io/projected/2fdc501a-d834-461d-bef2-5c8e49751f2d-kube-api-access-jgc2k\") pod \"redhat-marketplace-k9jd8\" (UID: \"2fdc501a-d834-461d-bef2-5c8e49751f2d\") " pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:02:52 crc kubenswrapper[4860]: I1014 15:02:52.229985 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fdc501a-d834-461d-bef2-5c8e49751f2d-utilities\") pod \"redhat-marketplace-k9jd8\" (UID: \"2fdc501a-d834-461d-bef2-5c8e49751f2d\") " 
pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:02:52 crc kubenswrapper[4860]: I1014 15:02:52.230054 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fdc501a-d834-461d-bef2-5c8e49751f2d-catalog-content\") pod \"redhat-marketplace-k9jd8\" (UID: \"2fdc501a-d834-461d-bef2-5c8e49751f2d\") " pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:02:52 crc kubenswrapper[4860]: I1014 15:02:52.230606 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fdc501a-d834-461d-bef2-5c8e49751f2d-catalog-content\") pod \"redhat-marketplace-k9jd8\" (UID: \"2fdc501a-d834-461d-bef2-5c8e49751f2d\") " pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:02:52 crc kubenswrapper[4860]: I1014 15:02:52.230627 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fdc501a-d834-461d-bef2-5c8e49751f2d-utilities\") pod \"redhat-marketplace-k9jd8\" (UID: \"2fdc501a-d834-461d-bef2-5c8e49751f2d\") " pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:02:52 crc kubenswrapper[4860]: I1014 15:02:52.265115 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgc2k\" (UniqueName: \"kubernetes.io/projected/2fdc501a-d834-461d-bef2-5c8e49751f2d-kube-api-access-jgc2k\") pod \"redhat-marketplace-k9jd8\" (UID: \"2fdc501a-d834-461d-bef2-5c8e49751f2d\") " pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:02:52 crc kubenswrapper[4860]: I1014 15:02:52.328372 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.359778 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9jd8"] Oct 14 15:02:53 crc kubenswrapper[4860]: W1014 15:02:53.367483 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fdc501a_d834_461d_bef2_5c8e49751f2d.slice/crio-4676ecf3ec396a69dea1b3288a09b77dd265cc109f43c36941abd6c2144acc16 WatchSource:0}: Error finding container 4676ecf3ec396a69dea1b3288a09b77dd265cc109f43c36941abd6c2144acc16: Status 404 returned error can't find the container with id 4676ecf3ec396a69dea1b3288a09b77dd265cc109f43c36941abd6c2144acc16 Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.589615 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dp7gz"] Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.590613 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.603539 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dp7gz"] Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.648369 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78d9369-55a0-45c3-b850-e92917256af4-utilities\") pod \"certified-operators-dp7gz\" (UID: \"e78d9369-55a0-45c3-b850-e92917256af4\") " pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.648447 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78d9369-55a0-45c3-b850-e92917256af4-catalog-content\") pod \"certified-operators-dp7gz\" (UID: \"e78d9369-55a0-45c3-b850-e92917256af4\") " pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.648604 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qptzm\" (UniqueName: \"kubernetes.io/projected/e78d9369-55a0-45c3-b850-e92917256af4-kube-api-access-qptzm\") pod \"certified-operators-dp7gz\" (UID: \"e78d9369-55a0-45c3-b850-e92917256af4\") " pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.749558 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78d9369-55a0-45c3-b850-e92917256af4-utilities\") pod \"certified-operators-dp7gz\" (UID: \"e78d9369-55a0-45c3-b850-e92917256af4\") " pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.749624 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78d9369-55a0-45c3-b850-e92917256af4-catalog-content\") pod \"certified-operators-dp7gz\" (UID: \"e78d9369-55a0-45c3-b850-e92917256af4\") " pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.749674 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qptzm\" (UniqueName: \"kubernetes.io/projected/e78d9369-55a0-45c3-b850-e92917256af4-kube-api-access-qptzm\") pod \"certified-operators-dp7gz\" (UID: \"e78d9369-55a0-45c3-b850-e92917256af4\") " pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.750232 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78d9369-55a0-45c3-b850-e92917256af4-catalog-content\") pod \"certified-operators-dp7gz\" (UID: \"e78d9369-55a0-45c3-b850-e92917256af4\") " pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.750367 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78d9369-55a0-45c3-b850-e92917256af4-utilities\") pod \"certified-operators-dp7gz\" (UID: \"e78d9369-55a0-45c3-b850-e92917256af4\") " pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.766986 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qptzm\" (UniqueName: \"kubernetes.io/projected/e78d9369-55a0-45c3-b850-e92917256af4-kube-api-access-qptzm\") pod \"certified-operators-dp7gz\" (UID: \"e78d9369-55a0-45c3-b850-e92917256af4\") " pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.913289 4860 generic.go:334] "Generic (PLEG): container finished" podID="2fdc501a-d834-461d-bef2-5c8e49751f2d" containerID="67356529d63fbcea02720dae94c52b074df57c33e36446c2dca8e9530d513a4c" exitCode=0 Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.913360 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9jd8" event={"ID":"2fdc501a-d834-461d-bef2-5c8e49751f2d","Type":"ContainerDied","Data":"67356529d63fbcea02720dae94c52b074df57c33e36446c2dca8e9530d513a4c"} Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.913429 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:02:53 crc kubenswrapper[4860]: I1014 15:02:53.913464 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9jd8" event={"ID":"2fdc501a-d834-461d-bef2-5c8e49751f2d","Type":"ContainerStarted","Data":"4676ecf3ec396a69dea1b3288a09b77dd265cc109f43c36941abd6c2144acc16"} Oct 14 15:02:54 crc kubenswrapper[4860]: I1014 15:02:54.395583 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dp7gz"] Oct 14 15:02:54 crc kubenswrapper[4860]: W1014 15:02:54.402176 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode78d9369_55a0_45c3_b850_e92917256af4.slice/crio-b0b5cbf10df29a57fdc29cf290f8ac4626c1fffa7d679b5b4393d87523adbb3b WatchSource:0}: Error finding container b0b5cbf10df29a57fdc29cf290f8ac4626c1fffa7d679b5b4393d87523adbb3b: Status 404 returned error can't find the container with id b0b5cbf10df29a57fdc29cf290f8ac4626c1fffa7d679b5b4393d87523adbb3b Oct 14 15:02:54 crc kubenswrapper[4860]: I1014 15:02:54.920383 4860 generic.go:334] "Generic (PLEG): container finished" podID="e78d9369-55a0-45c3-b850-e92917256af4" containerID="7d85c241bbb41de867607c306145d5ca3093c7f939460502b8a15b28ee9b1a2f" exitCode=0 Oct 14 15:02:54 crc kubenswrapper[4860]: I1014 15:02:54.920533 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dp7gz" event={"ID":"e78d9369-55a0-45c3-b850-e92917256af4","Type":"ContainerDied","Data":"7d85c241bbb41de867607c306145d5ca3093c7f939460502b8a15b28ee9b1a2f"} Oct 14 15:02:54 crc kubenswrapper[4860]: I1014 15:02:54.920763 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dp7gz" event={"ID":"e78d9369-55a0-45c3-b850-e92917256af4","Type":"ContainerStarted","Data":"b0b5cbf10df29a57fdc29cf290f8ac4626c1fffa7d679b5b4393d87523adbb3b"} Oct 14 15:02:57 crc kubenswrapper[4860]: I1014 15:02:57.938467 4860 generic.go:334] "Generic (PLEG): container finished" podID="2fdc501a-d834-461d-bef2-5c8e49751f2d" containerID="c6ed572fde404d0ccda2af8ac352ac2930ceedafc8af02f61916a11d9456b46e" exitCode=0 Oct 14 15:02:57 crc kubenswrapper[4860]: I1014 15:02:57.938531 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9jd8" 
event={"ID":"2fdc501a-d834-461d-bef2-5c8e49751f2d","Type":"ContainerDied","Data":"c6ed572fde404d0ccda2af8ac352ac2930ceedafc8af02f61916a11d9456b46e"} Oct 14 15:02:57 crc kubenswrapper[4860]: I1014 15:02:57.941655 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-zn5lz" event={"ID":"b7e911b9-3fd1-49b4-8716-70507a0b2aa4","Type":"ContainerStarted","Data":"303142998a148c11c12505b1c16e3439ffa88dd236e4e3a37a6051df75dab426"} Oct 14 15:02:57 crc kubenswrapper[4860]: I1014 15:02:57.990783 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-zn5lz" podStartSLOduration=2.242381604 podStartE2EDuration="15.99075788s" podCreationTimestamp="2025-10-14 15:02:42 +0000 UTC" firstStartedPulling="2025-10-14 15:02:43.205398048 +0000 UTC m=+824.792181497" lastFinishedPulling="2025-10-14 15:02:56.953774314 +0000 UTC m=+838.540557773" observedRunningTime="2025-10-14 15:02:57.987373738 +0000 UTC m=+839.574157227" watchObservedRunningTime="2025-10-14 15:02:57.99075788 +0000 UTC m=+839.577541369" Oct 14 15:02:58 crc kubenswrapper[4860]: I1014 15:02:58.890390 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-t966d"] Oct 14 15:02:58 crc kubenswrapper[4860]: I1014 15:02:58.891413 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-t966d" Oct 14 15:02:58 crc kubenswrapper[4860]: I1014 15:02:58.893384 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-ndr2h" Oct 14 15:02:58 crc kubenswrapper[4860]: I1014 15:02:58.933667 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt"] Oct 14 15:02:58 crc kubenswrapper[4860]: I1014 15:02:58.934767 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt" Oct 14 15:02:58 crc kubenswrapper[4860]: I1014 15:02:58.957167 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 14 15:02:58 crc kubenswrapper[4860]: I1014 15:02:58.960292 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-286xt\" (UniqueName: \"kubernetes.io/projected/9bef2f42-22a8-4cde-8267-c890543fe82e-kube-api-access-286xt\") pod \"nmstate-metrics-fdff9cb8d-t966d\" (UID: \"9bef2f42-22a8-4cde-8267-c890543fe82e\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-t966d" Oct 14 15:02:58 crc kubenswrapper[4860]: I1014 15:02:58.962117 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wjlnp"] Oct 14 15:02:58 crc kubenswrapper[4860]: I1014 15:02:58.962972 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-wjlnp" Oct 14 15:02:58 crc kubenswrapper[4860]: I1014 15:02:58.965169 4860 generic.go:334] "Generic (PLEG): container finished" podID="e78d9369-55a0-45c3-b850-e92917256af4" containerID="1bf903c39d9a1a70440f23ec8aab6e5c8e34b2501de294a537fe4127ddfc9b03" exitCode=0 Oct 14 15:02:58 crc kubenswrapper[4860]: I1014 15:02:58.965682 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dp7gz" event={"ID":"e78d9369-55a0-45c3-b850-e92917256af4","Type":"ContainerDied","Data":"1bf903c39d9a1a70440f23ec8aab6e5c8e34b2501de294a537fe4127ddfc9b03"} Oct 14 15:02:58 crc kubenswrapper[4860]: I1014 15:02:58.967500 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt"] Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.004447 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-t966d"] Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.069560 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3e969961-ebc3-4830-b52c-bbeb744ea07e-dbus-socket\") pod \"nmstate-handler-wjlnp\" (UID: \"3e969961-ebc3-4830-b52c-bbeb744ea07e\") " pod="openshift-nmstate/nmstate-handler-wjlnp" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.069616 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-286xt\" (UniqueName: \"kubernetes.io/projected/9bef2f42-22a8-4cde-8267-c890543fe82e-kube-api-access-286xt\") pod \"nmstate-metrics-fdff9cb8d-t966d\" (UID: \"9bef2f42-22a8-4cde-8267-c890543fe82e\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-t966d" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.069662 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3e969961-ebc3-4830-b52c-bbeb744ea07e-nmstate-lock\") pod \"nmstate-handler-wjlnp\" (UID: \"3e969961-ebc3-4830-b52c-bbeb744ea07e\") " pod="openshift-nmstate/nmstate-handler-wjlnp" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.069732 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldww6\" (UniqueName: \"kubernetes.io/projected/c941e868-49fb-4e89-896a-50f0dbbfe71b-kube-api-access-ldww6\") pod \"nmstate-webhook-6cdbc54649-c6bxt\" (UID: \"c941e868-49fb-4e89-896a-50f0dbbfe71b\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.069805 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c941e868-49fb-4e89-896a-50f0dbbfe71b-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-c6bxt\" (UID: \"c941e868-49fb-4e89-896a-50f0dbbfe71b\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.069831 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3e969961-ebc3-4830-b52c-bbeb744ea07e-ovs-socket\") pod \"nmstate-handler-wjlnp\" (UID: \"3e969961-ebc3-4830-b52c-bbeb744ea07e\") " pod="openshift-nmstate/nmstate-handler-wjlnp" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.069867 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmk7w\" (UniqueName: \"kubernetes.io/projected/3e969961-ebc3-4830-b52c-bbeb744ea07e-kube-api-access-qmk7w\") pod \"nmstate-handler-wjlnp\" (UID: \"3e969961-ebc3-4830-b52c-bbeb744ea07e\") " pod="openshift-nmstate/nmstate-handler-wjlnp" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.089247 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7"] Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.089873 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.094405 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-p9f4c" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.094651 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.094797 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.116759 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7"] Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.125688 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-286xt\" (UniqueName: \"kubernetes.io/projected/9bef2f42-22a8-4cde-8267-c890543fe82e-kube-api-access-286xt\") pod \"nmstate-metrics-fdff9cb8d-t966d\" (UID: \"9bef2f42-22a8-4cde-8267-c890543fe82e\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-t966d" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.170932 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmk7w\" (UniqueName: \"kubernetes.io/projected/3e969961-ebc3-4830-b52c-bbeb744ea07e-kube-api-access-qmk7w\") pod \"nmstate-handler-wjlnp\" (UID: \"3e969961-ebc3-4830-b52c-bbeb744ea07e\") " pod="openshift-nmstate/nmstate-handler-wjlnp" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.170993 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3e969961-ebc3-4830-b52c-bbeb744ea07e-dbus-socket\") pod \"nmstate-handler-wjlnp\" (UID: \"3e969961-ebc3-4830-b52c-bbeb744ea07e\") " pod="openshift-nmstate/nmstate-handler-wjlnp" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.171046 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqgcw\" (UniqueName: \"kubernetes.io/projected/e5be091b-1de9-4a04-80b5-68ddf4fc73da-kube-api-access-pqgcw\") pod \"nmstate-console-plugin-6b874cbd85-stgb7\" (UID: \"e5be091b-1de9-4a04-80b5-68ddf4fc73da\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.171090 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3e969961-ebc3-4830-b52c-bbeb744ea07e-nmstate-lock\") pod \"nmstate-handler-wjlnp\" (UID: \"3e969961-ebc3-4830-b52c-bbeb744ea07e\") " pod="openshift-nmstate/nmstate-handler-wjlnp" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.171141 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e5be091b-1de9-4a04-80b5-68ddf4fc73da-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-stgb7\" (UID: \"e5be091b-1de9-4a04-80b5-68ddf4fc73da\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.171168 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5be091b-1de9-4a04-80b5-68ddf4fc73da-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-stgb7\" (UID: \"e5be091b-1de9-4a04-80b5-68ddf4fc73da\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.171188 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldww6\" (UniqueName: \"kubernetes.io/projected/c941e868-49fb-4e89-896a-50f0dbbfe71b-kube-api-access-ldww6\") pod \"nmstate-webhook-6cdbc54649-c6bxt\" (UID: \"c941e868-49fb-4e89-896a-50f0dbbfe71b\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.171228 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c941e868-49fb-4e89-896a-50f0dbbfe71b-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-c6bxt\" (UID: \"c941e868-49fb-4e89-896a-50f0dbbfe71b\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.171245 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3e969961-ebc3-4830-b52c-bbeb744ea07e-ovs-socket\") pod \"nmstate-handler-wjlnp\" (UID: \"3e969961-ebc3-4830-b52c-bbeb744ea07e\") " pod="openshift-nmstate/nmstate-handler-wjlnp" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.171296 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3e969961-ebc3-4830-b52c-bbeb744ea07e-dbus-socket\") pod \"nmstate-handler-wjlnp\" (UID: \"3e969961-ebc3-4830-b52c-bbeb744ea07e\") " pod="openshift-nmstate/nmstate-handler-wjlnp" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.171474 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3e969961-ebc3-4830-b52c-bbeb744ea07e-nmstate-lock\") pod \"nmstate-handler-wjlnp\" (UID: \"3e969961-ebc3-4830-b52c-bbeb744ea07e\") " pod="openshift-nmstate/nmstate-handler-wjlnp" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.172270 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3e969961-ebc3-4830-b52c-bbeb744ea07e-ovs-socket\") pod \"nmstate-handler-wjlnp\" (UID: \"3e969961-ebc3-4830-b52c-bbeb744ea07e\") " pod="openshift-nmstate/nmstate-handler-wjlnp" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.175019 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 14 15:02:59 crc kubenswrapper[4860]: E1014 15:02:59.183157 4860 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Oct 14 15:02:59 crc kubenswrapper[4860]: E1014 15:02:59.183442 4860 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/c941e868-49fb-4e89-896a-50f0dbbfe71b-tls-key-pair podName:c941e868-49fb-4e89-896a-50f0dbbfe71b nodeName:}" failed. No retries permitted until 2025-10-14 15:02:59.683418887 +0000 UTC m=+841.270202336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/c941e868-49fb-4e89-896a-50f0dbbfe71b-tls-key-pair") pod "nmstate-webhook-6cdbc54649-c6bxt" (UID: "c941e868-49fb-4e89-896a-50f0dbbfe71b") : secret "openshift-nmstate-webhook" not found Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.190621 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmk7w\" (UniqueName: \"kubernetes.io/projected/3e969961-ebc3-4830-b52c-bbeb744ea07e-kube-api-access-qmk7w\") pod \"nmstate-handler-wjlnp\" (UID: \"3e969961-ebc3-4830-b52c-bbeb744ea07e\") " pod="openshift-nmstate/nmstate-handler-wjlnp" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.190902 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldww6\" (UniqueName: \"kubernetes.io/projected/c941e868-49fb-4e89-896a-50f0dbbfe71b-kube-api-access-ldww6\") pod \"nmstate-webhook-6cdbc54649-c6bxt\" (UID: \"c941e868-49fb-4e89-896a-50f0dbbfe71b\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.262711 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-ndr2h" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.271184 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-t966d" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.272751 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqgcw\" (UniqueName: \"kubernetes.io/projected/e5be091b-1de9-4a04-80b5-68ddf4fc73da-kube-api-access-pqgcw\") pod \"nmstate-console-plugin-6b874cbd85-stgb7\" (UID: \"e5be091b-1de9-4a04-80b5-68ddf4fc73da\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.272955 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e5be091b-1de9-4a04-80b5-68ddf4fc73da-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-stgb7\" (UID: \"e5be091b-1de9-4a04-80b5-68ddf4fc73da\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.273104 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5be091b-1de9-4a04-80b5-68ddf4fc73da-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-stgb7\" (UID: \"e5be091b-1de9-4a04-80b5-68ddf4fc73da\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.273996 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e5be091b-1de9-4a04-80b5-68ddf4fc73da-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-stgb7\" (UID: \"e5be091b-1de9-4a04-80b5-68ddf4fc73da\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.276996 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5be091b-1de9-4a04-80b5-68ddf4fc73da-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-stgb7\" (UID: \"e5be091b-1de9-4a04-80b5-68ddf4fc73da\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.286678 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wjlnp" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.300155 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6456ccfc99-4vwxh"] Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.301012 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.302052 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqgcw\" (UniqueName: \"kubernetes.io/projected/e5be091b-1de9-4a04-80b5-68ddf4fc73da-kube-api-access-pqgcw\") pod \"nmstate-console-plugin-6b874cbd85-stgb7\" (UID: \"e5be091b-1de9-4a04-80b5-68ddf4fc73da\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.316483 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6456ccfc99-4vwxh"] Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.407133 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.407311 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:02:59 crc kubenswrapper[4860]: W1014 15:02:59.415825 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e969961_ebc3_4830_b52c_bbeb744ea07e.slice/crio-7f59707e98a51051cbf07ba7d769e89cf54e539df041bbcbd6ba7d26e955a33e WatchSource:0}: Error finding container 7f59707e98a51051cbf07ba7d769e89cf54e539df041bbcbd6ba7d26e955a33e: Status 404 returned error can't find the container with id 7f59707e98a51051cbf07ba7d769e89cf54e539df041bbcbd6ba7d26e955a33e Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.439225 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.467340 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.475254 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qznkg\" (UniqueName: \"kubernetes.io/projected/0e82623d-718b-4f3e-9af7-7a1be209a7c9-kube-api-access-qznkg\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.475288 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e82623d-718b-4f3e-9af7-7a1be209a7c9-console-config\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.475337 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e82623d-718b-4f3e-9af7-7a1be209a7c9-console-serving-cert\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.475367 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e82623d-718b-4f3e-9af7-7a1be209a7c9-oauth-serving-cert\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.475383 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e82623d-718b-4f3e-9af7-7a1be209a7c9-console-oauth-config\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.475399 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e82623d-718b-4f3e-9af7-7a1be209a7c9-trusted-ca-bundle\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.475415 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e82623d-718b-4f3e-9af7-7a1be209a7c9-service-ca\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.577531 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e82623d-718b-4f3e-9af7-7a1be209a7c9-oauth-serving-cert\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " 
pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.577916 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e82623d-718b-4f3e-9af7-7a1be209a7c9-console-oauth-config\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.577943 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e82623d-718b-4f3e-9af7-7a1be209a7c9-trusted-ca-bundle\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.577968 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e82623d-718b-4f3e-9af7-7a1be209a7c9-service-ca\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.578122 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qznkg\" (UniqueName: \"kubernetes.io/projected/0e82623d-718b-4f3e-9af7-7a1be209a7c9-kube-api-access-qznkg\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.578158 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e82623d-718b-4f3e-9af7-7a1be209a7c9-console-config\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.578224 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e82623d-718b-4f3e-9af7-7a1be209a7c9-console-serving-cert\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.579828 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e82623d-718b-4f3e-9af7-7a1be209a7c9-service-ca\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.580002 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e82623d-718b-4f3e-9af7-7a1be209a7c9-trusted-ca-bundle\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.580185 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e82623d-718b-4f3e-9af7-7a1be209a7c9-console-config\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" 
Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.580306 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e82623d-718b-4f3e-9af7-7a1be209a7c9-oauth-serving-cert\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.585108 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e82623d-718b-4f3e-9af7-7a1be209a7c9-console-oauth-config\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.588016 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e82623d-718b-4f3e-9af7-7a1be209a7c9-console-serving-cert\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.597854 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qznkg\" (UniqueName: \"kubernetes.io/projected/0e82623d-718b-4f3e-9af7-7a1be209a7c9-kube-api-access-qznkg\") pod \"console-6456ccfc99-4vwxh\" (UID: \"0e82623d-718b-4f3e-9af7-7a1be209a7c9\") " pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.631701 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.781786 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c941e868-49fb-4e89-896a-50f0dbbfe71b-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-c6bxt\" (UID: \"c941e868-49fb-4e89-896a-50f0dbbfe71b\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.785586 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c941e868-49fb-4e89-896a-50f0dbbfe71b-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-c6bxt\" (UID: \"c941e868-49fb-4e89-896a-50f0dbbfe71b\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.832433 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-t966d"] Oct 14 15:02:59 crc kubenswrapper[4860]: W1014 15:02:59.843823 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bef2f42_22a8_4cde_8267_c890543fe82e.slice/crio-e12ff186028c522fd79cf1f5600109e0c6637c7d4e9c15dfaf1e4e0bd9d7c936 WatchSource:0}: Error finding container e12ff186028c522fd79cf1f5600109e0c6637c7d4e9c15dfaf1e4e0bd9d7c936: Status 404 returned error can't find the container with id e12ff186028c522fd79cf1f5600109e0c6637c7d4e9c15dfaf1e4e0bd9d7c936 Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.876522 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt" Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.905005 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7"] Oct 14 15:02:59 crc kubenswrapper[4860]: W1014 15:02:59.918332 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5be091b_1de9_4a04_80b5_68ddf4fc73da.slice/crio-fa3a840584e8502e3ee6a027830c1b4b54f59f94759138a4d84bc3850a9bfd10 WatchSource:0}: Error finding container fa3a840584e8502e3ee6a027830c1b4b54f59f94759138a4d84bc3850a9bfd10: Status 404 returned error can't find the container with id fa3a840584e8502e3ee6a027830c1b4b54f59f94759138a4d84bc3850a9bfd10 Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.974655 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9jd8" event={"ID":"2fdc501a-d834-461d-bef2-5c8e49751f2d","Type":"ContainerStarted","Data":"afac38f747e55ce72da397b1454dbaba8d1d43f57db338db10e70d09f319118b"} Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.976540 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-t966d" event={"ID":"9bef2f42-22a8-4cde-8267-c890543fe82e","Type":"ContainerStarted","Data":"e12ff186028c522fd79cf1f5600109e0c6637c7d4e9c15dfaf1e4e0bd9d7c936"} Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.977852 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wjlnp" event={"ID":"3e969961-ebc3-4830-b52c-bbeb744ea07e","Type":"ContainerStarted","Data":"7f59707e98a51051cbf07ba7d769e89cf54e539df041bbcbd6ba7d26e955a33e"} Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.980439 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7" event={"ID":"e5be091b-1de9-4a04-80b5-68ddf4fc73da","Type":"ContainerStarted","Data":"fa3a840584e8502e3ee6a027830c1b4b54f59f94759138a4d84bc3850a9bfd10"} Oct 14 15:02:59 crc kubenswrapper[4860]: I1014 15:02:59.996117 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k9jd8" podStartSLOduration=3.5206863459999997 podStartE2EDuration="8.996077128s" podCreationTimestamp="2025-10-14 15:02:51 +0000 UTC" firstStartedPulling="2025-10-14 15:02:53.915273702 +0000 UTC m=+835.502057151" lastFinishedPulling="2025-10-14 15:02:59.390664484 +0000 UTC m=+840.977447933" observedRunningTime="2025-10-14 15:02:59.995792931 +0000 UTC m=+841.582576380" watchObservedRunningTime="2025-10-14 15:02:59.996077128 +0000 UTC m=+841.582860577" Oct 14 15:03:00 crc kubenswrapper[4860]: I1014 15:03:00.030625 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:03:00 crc kubenswrapper[4860]: I1014 15:03:00.146136 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6456ccfc99-4vwxh"] Oct 14 15:03:00 crc kubenswrapper[4860]: I1014 15:03:00.308405 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt"] Oct 14 15:03:00 crc kubenswrapper[4860]: W1014 15:03:00.315464 4860 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc941e868_49fb_4e89_896a_50f0dbbfe71b.slice/crio-b45888ebfd3f3d1beb57cec835fc3ead7ea20d0bb6091f668f6b8c4cb6878bdb WatchSource:0}: Error finding container b45888ebfd3f3d1beb57cec835fc3ead7ea20d0bb6091f668f6b8c4cb6878bdb: Status 404 returned error can't find the container with id b45888ebfd3f3d1beb57cec835fc3ead7ea20d0bb6091f668f6b8c4cb6878bdb Oct 14 15:03:00 crc kubenswrapper[4860]: I1014 15:03:00.989863 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dp7gz" event={"ID":"e78d9369-55a0-45c3-b850-e92917256af4","Type":"ContainerStarted","Data":"04b333e6d82c2dca5ffc78ec5214a7a6e6280c46459c14ca74aedb663dd5c1f1"} Oct 14 15:03:00 crc kubenswrapper[4860]: I1014 15:03:00.993011 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6456ccfc99-4vwxh" event={"ID":"0e82623d-718b-4f3e-9af7-7a1be209a7c9","Type":"ContainerStarted","Data":"2a872887cc3d7fceac421065db61b88401306c5641cd62d5fe9b656ddfc8d542"} Oct 14 15:03:00 crc kubenswrapper[4860]: I1014 15:03:00.993063 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6456ccfc99-4vwxh" event={"ID":"0e82623d-718b-4f3e-9af7-7a1be209a7c9","Type":"ContainerStarted","Data":"28c7c2eef54bd25b1aea54221ebc51a941e5ea5b7f4d74f03466ed172318e41c"} Oct 14 15:03:00 crc kubenswrapper[4860]: I1014 15:03:00.994178 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt" event={"ID":"c941e868-49fb-4e89-896a-50f0dbbfe71b","Type":"ContainerStarted","Data":"b45888ebfd3f3d1beb57cec835fc3ead7ea20d0bb6091f668f6b8c4cb6878bdb"} Oct 14 15:03:01 crc kubenswrapper[4860]: I1014 15:03:01.017566 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dp7gz" podStartSLOduration=4.969024091 podStartE2EDuration="8.017546656s" podCreationTimestamp="2025-10-14 15:02:53 +0000 UTC" firstStartedPulling="2025-10-14 15:02:56.783268962 +0000 UTC m=+838.370052421" lastFinishedPulling="2025-10-14 15:02:59.831791537 +0000 UTC m=+841.418574986" observedRunningTime="2025-10-14 15:03:01.011240042 +0000 UTC m=+842.598023511" watchObservedRunningTime="2025-10-14 15:03:01.017546656 +0000 UTC m=+842.604330105" Oct 14 15:03:01 crc kubenswrapper[4860]: I1014 15:03:01.032314 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6456ccfc99-4vwxh" podStartSLOduration=2.032293384 podStartE2EDuration="2.032293384s" podCreationTimestamp="2025-10-14 15:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:03:01.029627029 +0000 UTC m=+842.616410498" watchObservedRunningTime="2025-10-14 15:03:01.032293384 +0000 UTC m=+842.619076833" Oct 14 15:03:02 crc kubenswrapper[4860]: I1014 15:03:02.329302 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:03:02 crc kubenswrapper[4860]: I1014 15:03:02.329338 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:03:02 crc kubenswrapper[4860]: I1014 15:03:02.372091 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:03:02 crc kubenswrapper[4860]: I1014 15:03:02.381106 4860 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hkwxz"] Oct 14 15:03:03 crc kubenswrapper[4860]: I1014 15:03:03.005921 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hkwxz" podUID="96856478-85d6-461e-9188-f8bff53f9b03" containerName="registry-server" containerID="cri-o://86b01d30b8ce04f2034cd7e3eeceab71fbe11238ea2dab681b9455737ec02387" gracePeriod=2 Oct 14 15:03:03 crc kubenswrapper[4860]: I1014 15:03:03.913985 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:03:03 crc kubenswrapper[4860]: I1014 15:03:03.914277 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:03:03 crc kubenswrapper[4860]: I1014 15:03:03.978822 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:03:04 crc kubenswrapper[4860]: I1014 15:03:04.015444 4860 generic.go:334] "Generic (PLEG): container finished" podID="96856478-85d6-461e-9188-f8bff53f9b03" containerID="86b01d30b8ce04f2034cd7e3eeceab71fbe11238ea2dab681b9455737ec02387" exitCode=0 Oct 14 15:03:04 crc kubenswrapper[4860]: I1014 15:03:04.016488 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwxz" event={"ID":"96856478-85d6-461e-9188-f8bff53f9b03","Type":"ContainerDied","Data":"86b01d30b8ce04f2034cd7e3eeceab71fbe11238ea2dab681b9455737ec02387"} Oct 14 15:03:04 crc kubenswrapper[4860]: I1014 15:03:04.071883 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:03:04 crc kubenswrapper[4860]: I1014 15:03:04.108518 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:03:04 crc kubenswrapper[4860]: I1014 15:03:04.242404 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96856478-85d6-461e-9188-f8bff53f9b03-catalog-content\") pod \"96856478-85d6-461e-9188-f8bff53f9b03\" (UID: \"96856478-85d6-461e-9188-f8bff53f9b03\") " Oct 14 15:03:04 crc kubenswrapper[4860]: I1014 15:03:04.242523 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96856478-85d6-461e-9188-f8bff53f9b03-utilities\") pod \"96856478-85d6-461e-9188-f8bff53f9b03\" (UID: \"96856478-85d6-461e-9188-f8bff53f9b03\") " Oct 14 15:03:04 crc kubenswrapper[4860]: I1014 15:03:04.242545 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c65pj\" (UniqueName: \"kubernetes.io/projected/96856478-85d6-461e-9188-f8bff53f9b03-kube-api-access-c65pj\") pod \"96856478-85d6-461e-9188-f8bff53f9b03\" (UID: \"96856478-85d6-461e-9188-f8bff53f9b03\") " Oct 14 15:03:04 crc kubenswrapper[4860]: I1014 15:03:04.243794 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96856478-85d6-461e-9188-f8bff53f9b03-utilities" (OuterVolumeSpecName: "utilities") pod "96856478-85d6-461e-9188-f8bff53f9b03" (UID: "96856478-85d6-461e-9188-f8bff53f9b03"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:03:04 crc kubenswrapper[4860]: I1014 15:03:04.247186 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96856478-85d6-461e-9188-f8bff53f9b03-kube-api-access-c65pj" (OuterVolumeSpecName: "kube-api-access-c65pj") pod "96856478-85d6-461e-9188-f8bff53f9b03" (UID: "96856478-85d6-461e-9188-f8bff53f9b03"). InnerVolumeSpecName "kube-api-access-c65pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:03:04 crc kubenswrapper[4860]: I1014 15:03:04.288640 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96856478-85d6-461e-9188-f8bff53f9b03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96856478-85d6-461e-9188-f8bff53f9b03" (UID: "96856478-85d6-461e-9188-f8bff53f9b03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:03:04 crc kubenswrapper[4860]: I1014 15:03:04.343861 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96856478-85d6-461e-9188-f8bff53f9b03-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:04 crc kubenswrapper[4860]: I1014 15:03:04.343892 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c65pj\" (UniqueName: \"kubernetes.io/projected/96856478-85d6-461e-9188-f8bff53f9b03-kube-api-access-c65pj\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:04 crc kubenswrapper[4860]: I1014 15:03:04.343905 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96856478-85d6-461e-9188-f8bff53f9b03-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:04 crc kubenswrapper[4860]: I1014 15:03:04.780329 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dp7gz"] Oct 14 15:03:05 crc kubenswrapper[4860]: I1014 15:03:05.024800 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwxz" event={"ID":"96856478-85d6-461e-9188-f8bff53f9b03","Type":"ContainerDied","Data":"d2ecdfb0f59fac4547b37bbfc53614d35eb455ba04263c6d6ce9939f0f817c0e"} Oct 14 15:03:05 crc kubenswrapper[4860]: I1014 15:03:05.024818 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hkwxz" Oct 14 15:03:05 crc kubenswrapper[4860]: I1014 15:03:05.024859 4860 scope.go:117] "RemoveContainer" containerID="86b01d30b8ce04f2034cd7e3eeceab71fbe11238ea2dab681b9455737ec02387" Oct 14 15:03:05 crc kubenswrapper[4860]: I1014 15:03:05.026044 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7" event={"ID":"e5be091b-1de9-4a04-80b5-68ddf4fc73da","Type":"ContainerStarted","Data":"55d20493fce7a9b31be3d4df3c3a05e427bbe3c7b3ce328d06085635345cf848"} Oct 14 15:03:05 crc kubenswrapper[4860]: I1014 15:03:05.027656 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-t966d" event={"ID":"9bef2f42-22a8-4cde-8267-c890543fe82e","Type":"ContainerStarted","Data":"accda3d95c16d5e7690bc814f5c62b60867db62698162da08785a81aee05f5a2"} Oct 14 15:03:05 crc kubenswrapper[4860]: I1014 15:03:05.029112 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt" event={"ID":"c941e868-49fb-4e89-896a-50f0dbbfe71b","Type":"ContainerStarted","Data":"a0e74a3a12dafe950de46e1f6cdf532a6968ce87f99284d2cf2993f4d8db0819"} Oct 14 15:03:05 crc kubenswrapper[4860]: I1014 15:03:05.029341 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt" Oct 14 15:03:05 crc kubenswrapper[4860]: I1014 15:03:05.033307 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wjlnp" event={"ID":"3e969961-ebc3-4830-b52c-bbeb744ea07e","Type":"ContainerStarted","Data":"105db6b26dcd910fd3a2de039c875b43d9c0c5e0e96c77b60f777de705123397"} Oct 14 15:03:05 crc kubenswrapper[4860]: I1014 15:03:05.048380 4860 scope.go:117] "RemoveContainer" containerID="c0749dbbb76284b533f75e79d3f3d3006b7a8d46275dff88faf64c36c7a45d85" Oct 14 15:03:05 crc kubenswrapper[4860]: I1014 15:03:05.048594 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt" podStartSLOduration=3.455634109 podStartE2EDuration="7.048581273s" podCreationTimestamp="2025-10-14 15:02:58 +0000 UTC" firstStartedPulling="2025-10-14 15:03:00.317974118 +0000 UTC m=+841.904757567" lastFinishedPulling="2025-10-14 15:03:03.910921282 +0000 UTC m=+845.497704731" observedRunningTime="2025-10-14 15:03:05.04556726 +0000 UTC m=+846.632350729" watchObservedRunningTime="2025-10-14 15:03:05.048581273 +0000 UTC m=+846.635364732" Oct 14 15:03:05 crc kubenswrapper[4860]: I1014 15:03:05.070457 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-stgb7" podStartSLOduration=2.110359914 podStartE2EDuration="6.070441773s" podCreationTimestamp="2025-10-14 15:02:59 +0000 UTC" firstStartedPulling="2025-10-14 15:02:59.920844996 +0000 UTC m=+841.507628445" lastFinishedPulling="2025-10-14 15:03:03.880926855 +0000 UTC m=+845.467710304" observedRunningTime="2025-10-14 15:03:05.068141908 +0000 UTC m=+846.654925367" watchObservedRunningTime="2025-10-14 15:03:05.070441773 +0000 UTC m=+846.657225222" Oct 14 15:03:05 crc kubenswrapper[4860]: I1014 15:03:05.101845 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hkwxz"] Oct 14 15:03:05 crc kubenswrapper[4860]: I1014 15:03:05.107384 4860 scope.go:117] "RemoveContainer" containerID="9710f9aaf6912547340ea353f21cfdddc33776ede1ef02a5f4c0390e3383bc29" 
Oct 14 15:03:05 crc kubenswrapper[4860]: I1014 15:03:05.115816 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hkwxz"] Oct 14 15:03:05 crc kubenswrapper[4860]: I1014 15:03:05.131886 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wjlnp" podStartSLOduration=2.668685932 podStartE2EDuration="7.131870544s" podCreationTimestamp="2025-10-14 15:02:58 +0000 UTC" firstStartedPulling="2025-10-14 15:02:59.418934302 +0000 UTC m=+841.005717741" lastFinishedPulling="2025-10-14 15:03:03.882118894 +0000 UTC m=+845.468902353" observedRunningTime="2025-10-14 15:03:05.126606867 +0000 UTC m=+846.713390316" watchObservedRunningTime="2025-10-14 15:03:05.131870544 +0000 UTC m=+846.718653993" Oct 14 15:03:06 crc kubenswrapper[4860]: I1014 15:03:06.041623 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wjlnp" Oct 14 15:03:06 crc kubenswrapper[4860]: I1014 15:03:06.042729 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dp7gz" podUID="e78d9369-55a0-45c3-b850-e92917256af4" containerName="registry-server" containerID="cri-o://04b333e6d82c2dca5ffc78ec5214a7a6e6280c46459c14ca74aedb663dd5c1f1" gracePeriod=2 Oct 14 15:03:06 crc kubenswrapper[4860]: I1014 15:03:06.734199 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:03:06 crc kubenswrapper[4860]: I1014 15:03:06.880186 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78d9369-55a0-45c3-b850-e92917256af4-utilities\") pod \"e78d9369-55a0-45c3-b850-e92917256af4\" (UID: \"e78d9369-55a0-45c3-b850-e92917256af4\") " Oct 14 15:03:06 crc kubenswrapper[4860]: I1014 15:03:06.880303 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qptzm\" (UniqueName: \"kubernetes.io/projected/e78d9369-55a0-45c3-b850-e92917256af4-kube-api-access-qptzm\") pod \"e78d9369-55a0-45c3-b850-e92917256af4\" (UID: \"e78d9369-55a0-45c3-b850-e92917256af4\") " Oct 14 15:03:06 crc kubenswrapper[4860]: I1014 15:03:06.880430 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78d9369-55a0-45c3-b850-e92917256af4-catalog-content\") pod \"e78d9369-55a0-45c3-b850-e92917256af4\" (UID: \"e78d9369-55a0-45c3-b850-e92917256af4\") " Oct 14 15:03:06 crc kubenswrapper[4860]: I1014 15:03:06.881146 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e78d9369-55a0-45c3-b850-e92917256af4-utilities" (OuterVolumeSpecName: "utilities") pod "e78d9369-55a0-45c3-b850-e92917256af4" (UID: "e78d9369-55a0-45c3-b850-e92917256af4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:03:06 crc kubenswrapper[4860]: I1014 15:03:06.885934 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e78d9369-55a0-45c3-b850-e92917256af4-kube-api-access-qptzm" (OuterVolumeSpecName: "kube-api-access-qptzm") pod "e78d9369-55a0-45c3-b850-e92917256af4" (UID: "e78d9369-55a0-45c3-b850-e92917256af4"). InnerVolumeSpecName "kube-api-access-qptzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:03:06 crc kubenswrapper[4860]: I1014 15:03:06.930172 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e78d9369-55a0-45c3-b850-e92917256af4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e78d9369-55a0-45c3-b850-e92917256af4" (UID: "e78d9369-55a0-45c3-b850-e92917256af4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:03:06 crc kubenswrapper[4860]: I1014 15:03:06.982072 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78d9369-55a0-45c3-b850-e92917256af4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:06 crc kubenswrapper[4860]: I1014 15:03:06.982101 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78d9369-55a0-45c3-b850-e92917256af4-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:06 crc kubenswrapper[4860]: I1014 15:03:06.982112 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qptzm\" (UniqueName: \"kubernetes.io/projected/e78d9369-55a0-45c3-b850-e92917256af4-kube-api-access-qptzm\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.048312 4860 generic.go:334] "Generic (PLEG): container finished" podID="e78d9369-55a0-45c3-b850-e92917256af4" containerID="04b333e6d82c2dca5ffc78ec5214a7a6e6280c46459c14ca74aedb663dd5c1f1" exitCode=0 Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.048450 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dp7gz" event={"ID":"e78d9369-55a0-45c3-b850-e92917256af4","Type":"ContainerDied","Data":"04b333e6d82c2dca5ffc78ec5214a7a6e6280c46459c14ca74aedb663dd5c1f1"} Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.048477 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dp7gz" event={"ID":"e78d9369-55a0-45c3-b850-e92917256af4","Type":"ContainerDied","Data":"b0b5cbf10df29a57fdc29cf290f8ac4626c1fffa7d679b5b4393d87523adbb3b"} Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.048492 4860 scope.go:117] "RemoveContainer" containerID="04b333e6d82c2dca5ffc78ec5214a7a6e6280c46459c14ca74aedb663dd5c1f1" Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.048576 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dp7gz" Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.072477 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96856478-85d6-461e-9188-f8bff53f9b03" path="/var/lib/kubelet/pods/96856478-85d6-461e-9188-f8bff53f9b03/volumes" Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.075973 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-t966d" event={"ID":"9bef2f42-22a8-4cde-8267-c890543fe82e","Type":"ContainerStarted","Data":"587a86c38c824961d0e8639445c05bc02e3101e60da81ba7a741cae1ff7538aa"} Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.086328 4860 scope.go:117] "RemoveContainer" containerID="1bf903c39d9a1a70440f23ec8aab6e5c8e34b2501de294a537fe4127ddfc9b03" Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.096741 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-t966d" podStartSLOduration=2.188341583 podStartE2EDuration="9.096724761s" podCreationTimestamp="2025-10-14 15:02:58 +0000 UTC" firstStartedPulling="2025-10-14 15:02:59.845652025 +0000 UTC m=+841.432435474" lastFinishedPulling="2025-10-14 15:03:06.754035203 +0000 UTC m=+848.340818652" observedRunningTime="2025-10-14 15:03:07.094924586 +0000 UTC m=+848.681708055" watchObservedRunningTime="2025-10-14 15:03:07.096724761 +0000 UTC m=+848.683508220" Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.112093 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dp7gz"] Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.122394 4860 scope.go:117] "RemoveContainer" containerID="7d85c241bbb41de867607c306145d5ca3093c7f939460502b8a15b28ee9b1a2f" Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.129233 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dp7gz"] Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.139549 4860 scope.go:117] "RemoveContainer" containerID="04b333e6d82c2dca5ffc78ec5214a7a6e6280c46459c14ca74aedb663dd5c1f1" Oct 14 15:03:07 crc kubenswrapper[4860]: E1014 15:03:07.140248 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b333e6d82c2dca5ffc78ec5214a7a6e6280c46459c14ca74aedb663dd5c1f1\": container with ID starting with 04b333e6d82c2dca5ffc78ec5214a7a6e6280c46459c14ca74aedb663dd5c1f1 not found: ID does not exist" containerID="04b333e6d82c2dca5ffc78ec5214a7a6e6280c46459c14ca74aedb663dd5c1f1" Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.140363 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b333e6d82c2dca5ffc78ec5214a7a6e6280c46459c14ca74aedb663dd5c1f1"} err="failed to get container status \"04b333e6d82c2dca5ffc78ec5214a7a6e6280c46459c14ca74aedb663dd5c1f1\": rpc error: code = NotFound desc = could not find container \"04b333e6d82c2dca5ffc78ec5214a7a6e6280c46459c14ca74aedb663dd5c1f1\": container with ID starting with 04b333e6d82c2dca5ffc78ec5214a7a6e6280c46459c14ca74aedb663dd5c1f1 not found: ID does not exist" Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.140454 4860 scope.go:117] "RemoveContainer" containerID="1bf903c39d9a1a70440f23ec8aab6e5c8e34b2501de294a537fe4127ddfc9b03" Oct 14 15:03:07 crc kubenswrapper[4860]: E1014 15:03:07.140922 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"1bf903c39d9a1a70440f23ec8aab6e5c8e34b2501de294a537fe4127ddfc9b03\": container with ID starting with 1bf903c39d9a1a70440f23ec8aab6e5c8e34b2501de294a537fe4127ddfc9b03 not found: ID does not exist" containerID="1bf903c39d9a1a70440f23ec8aab6e5c8e34b2501de294a537fe4127ddfc9b03" Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.140978 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf903c39d9a1a70440f23ec8aab6e5c8e34b2501de294a537fe4127ddfc9b03"} err="failed to get container status \"1bf903c39d9a1a70440f23ec8aab6e5c8e34b2501de294a537fe4127ddfc9b03\": rpc error: code = NotFound desc = could not find container \"1bf903c39d9a1a70440f23ec8aab6e5c8e34b2501de294a537fe4127ddfc9b03\": container with ID starting with 1bf903c39d9a1a70440f23ec8aab6e5c8e34b2501de294a537fe4127ddfc9b03 not found: ID does not exist" Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.141015 4860 scope.go:117] "RemoveContainer" containerID="7d85c241bbb41de867607c306145d5ca3093c7f939460502b8a15b28ee9b1a2f" Oct 14 15:03:07 crc kubenswrapper[4860]: E1014 15:03:07.141492 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d85c241bbb41de867607c306145d5ca3093c7f939460502b8a15b28ee9b1a2f\": container with ID starting with 7d85c241bbb41de867607c306145d5ca3093c7f939460502b8a15b28ee9b1a2f not found: ID does not exist" containerID="7d85c241bbb41de867607c306145d5ca3093c7f939460502b8a15b28ee9b1a2f" Oct 14 15:03:07 crc kubenswrapper[4860]: I1014 15:03:07.141591 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d85c241bbb41de867607c306145d5ca3093c7f939460502b8a15b28ee9b1a2f"} err="failed to get container status \"7d85c241bbb41de867607c306145d5ca3093c7f939460502b8a15b28ee9b1a2f\": rpc error: code = NotFound desc = could not find container \"7d85c241bbb41de867607c306145d5ca3093c7f939460502b8a15b28ee9b1a2f\": container with ID starting with 7d85c241bbb41de867607c306145d5ca3093c7f939460502b8a15b28ee9b1a2f not found: ID does not exist" Oct 14 15:03:09 crc kubenswrapper[4860]: I1014 15:03:09.068440 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e78d9369-55a0-45c3-b850-e92917256af4" path="/var/lib/kubelet/pods/e78d9369-55a0-45c3-b850-e92917256af4/volumes" Oct 14 15:03:09 crc kubenswrapper[4860]: I1014 15:03:09.319498 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wjlnp" Oct 14 15:03:09 crc kubenswrapper[4860]: I1014 15:03:09.631783 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:03:09 crc kubenswrapper[4860]: I1014 15:03:09.632562 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:03:09 crc kubenswrapper[4860]: I1014 15:03:09.636790 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:03:10 crc kubenswrapper[4860]: I1014 15:03:10.093518 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6456ccfc99-4vwxh" Oct 14 15:03:10 crc kubenswrapper[4860]: I1014 15:03:10.146582 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sr5b4"] Oct 14 15:03:12 crc kubenswrapper[4860]: I1014 15:03:12.368135 
4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:03:12 crc kubenswrapper[4860]: I1014 15:03:12.412436 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9jd8"] Oct 14 15:03:13 crc kubenswrapper[4860]: I1014 15:03:13.109118 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k9jd8" podUID="2fdc501a-d834-461d-bef2-5c8e49751f2d" containerName="registry-server" containerID="cri-o://afac38f747e55ce72da397b1454dbaba8d1d43f57db338db10e70d09f319118b" gracePeriod=2 Oct 14 15:03:13 crc kubenswrapper[4860]: I1014 15:03:13.985423 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.073928 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgc2k\" (UniqueName: \"kubernetes.io/projected/2fdc501a-d834-461d-bef2-5c8e49751f2d-kube-api-access-jgc2k\") pod \"2fdc501a-d834-461d-bef2-5c8e49751f2d\" (UID: \"2fdc501a-d834-461d-bef2-5c8e49751f2d\") " Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.074098 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fdc501a-d834-461d-bef2-5c8e49751f2d-catalog-content\") pod \"2fdc501a-d834-461d-bef2-5c8e49751f2d\" (UID: \"2fdc501a-d834-461d-bef2-5c8e49751f2d\") " Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.074150 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fdc501a-d834-461d-bef2-5c8e49751f2d-utilities\") pod \"2fdc501a-d834-461d-bef2-5c8e49751f2d\" (UID: \"2fdc501a-d834-461d-bef2-5c8e49751f2d\") " Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.075515 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fdc501a-d834-461d-bef2-5c8e49751f2d-utilities" (OuterVolumeSpecName: "utilities") pod "2fdc501a-d834-461d-bef2-5c8e49751f2d" (UID: "2fdc501a-d834-461d-bef2-5c8e49751f2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.079404 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fdc501a-d834-461d-bef2-5c8e49751f2d-kube-api-access-jgc2k" (OuterVolumeSpecName: "kube-api-access-jgc2k") pod "2fdc501a-d834-461d-bef2-5c8e49751f2d" (UID: "2fdc501a-d834-461d-bef2-5c8e49751f2d"). InnerVolumeSpecName "kube-api-access-jgc2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.089395 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fdc501a-d834-461d-bef2-5c8e49751f2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fdc501a-d834-461d-bef2-5c8e49751f2d" (UID: "2fdc501a-d834-461d-bef2-5c8e49751f2d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.116151 4860 generic.go:334] "Generic (PLEG): container finished" podID="2fdc501a-d834-461d-bef2-5c8e49751f2d" containerID="afac38f747e55ce72da397b1454dbaba8d1d43f57db338db10e70d09f319118b" exitCode=0 Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.116204 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9jd8" event={"ID":"2fdc501a-d834-461d-bef2-5c8e49751f2d","Type":"ContainerDied","Data":"afac38f747e55ce72da397b1454dbaba8d1d43f57db338db10e70d09f319118b"} Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.116225 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9jd8" Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.116255 4860 scope.go:117] "RemoveContainer" containerID="afac38f747e55ce72da397b1454dbaba8d1d43f57db338db10e70d09f319118b" Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.116238 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9jd8" event={"ID":"2fdc501a-d834-461d-bef2-5c8e49751f2d","Type":"ContainerDied","Data":"4676ecf3ec396a69dea1b3288a09b77dd265cc109f43c36941abd6c2144acc16"} Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.130715 4860 scope.go:117] "RemoveContainer" containerID="c6ed572fde404d0ccda2af8ac352ac2930ceedafc8af02f61916a11d9456b46e" Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.148297 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9jd8"] Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.150768 4860 scope.go:117] "RemoveContainer" containerID="67356529d63fbcea02720dae94c52b074df57c33e36446c2dca8e9530d513a4c" Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.151896 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9jd8"] Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.166867 4860 scope.go:117] "RemoveContainer" containerID="afac38f747e55ce72da397b1454dbaba8d1d43f57db338db10e70d09f319118b" Oct 14 15:03:14 crc kubenswrapper[4860]: E1014 15:03:14.167538 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afac38f747e55ce72da397b1454dbaba8d1d43f57db338db10e70d09f319118b\": container with ID starting with afac38f747e55ce72da397b1454dbaba8d1d43f57db338db10e70d09f319118b not found: ID does not exist" containerID="afac38f747e55ce72da397b1454dbaba8d1d43f57db338db10e70d09f319118b" Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.167711 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afac38f747e55ce72da397b1454dbaba8d1d43f57db338db10e70d09f319118b"} err="failed to get container status \"afac38f747e55ce72da397b1454dbaba8d1d43f57db338db10e70d09f319118b\": rpc error: code = NotFound desc = could not find container \"afac38f747e55ce72da397b1454dbaba8d1d43f57db338db10e70d09f319118b\": container with ID starting with afac38f747e55ce72da397b1454dbaba8d1d43f57db338db10e70d09f319118b not found: ID does not exist" Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.167793 4860 scope.go:117] "RemoveContainer" containerID="c6ed572fde404d0ccda2af8ac352ac2930ceedafc8af02f61916a11d9456b46e" Oct 14 15:03:14 crc kubenswrapper[4860]: E1014 15:03:14.168351 4860 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6ed572fde404d0ccda2af8ac352ac2930ceedafc8af02f61916a11d9456b46e\": container with ID starting with c6ed572fde404d0ccda2af8ac352ac2930ceedafc8af02f61916a11d9456b46e not found: ID does not exist" containerID="c6ed572fde404d0ccda2af8ac352ac2930ceedafc8af02f61916a11d9456b46e" Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.168423 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6ed572fde404d0ccda2af8ac352ac2930ceedafc8af02f61916a11d9456b46e"} err="failed to get container status \"c6ed572fde404d0ccda2af8ac352ac2930ceedafc8af02f61916a11d9456b46e\": rpc error: code = NotFound desc = could not find container \"c6ed572fde404d0ccda2af8ac352ac2930ceedafc8af02f61916a11d9456b46e\": container with ID starting with c6ed572fde404d0ccda2af8ac352ac2930ceedafc8af02f61916a11d9456b46e not found: ID does not exist" Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.168482 4860 scope.go:117] "RemoveContainer" containerID="67356529d63fbcea02720dae94c52b074df57c33e36446c2dca8e9530d513a4c" Oct 14 15:03:14 crc kubenswrapper[4860]: E1014 15:03:14.168844 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67356529d63fbcea02720dae94c52b074df57c33e36446c2dca8e9530d513a4c\": container with ID starting with 67356529d63fbcea02720dae94c52b074df57c33e36446c2dca8e9530d513a4c not found: ID does not exist" containerID="67356529d63fbcea02720dae94c52b074df57c33e36446c2dca8e9530d513a4c" Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.168934 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67356529d63fbcea02720dae94c52b074df57c33e36446c2dca8e9530d513a4c"} err="failed to get container status \"67356529d63fbcea02720dae94c52b074df57c33e36446c2dca8e9530d513a4c\": rpc error: code = NotFound desc = could not find container \"67356529d63fbcea02720dae94c52b074df57c33e36446c2dca8e9530d513a4c\": container with ID starting with 67356529d63fbcea02720dae94c52b074df57c33e36446c2dca8e9530d513a4c not found: ID does not exist" Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.175574 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fdc501a-d834-461d-bef2-5c8e49751f2d-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.175604 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgc2k\" (UniqueName: \"kubernetes.io/projected/2fdc501a-d834-461d-bef2-5c8e49751f2d-kube-api-access-jgc2k\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:14 crc kubenswrapper[4860]: I1014 15:03:14.175614 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fdc501a-d834-461d-bef2-5c8e49751f2d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:15 crc kubenswrapper[4860]: I1014 15:03:15.069801 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fdc501a-d834-461d-bef2-5c8e49751f2d" path="/var/lib/kubelet/pods/2fdc501a-d834-461d-bef2-5c8e49751f2d/volumes" Oct 14 15:03:19 crc kubenswrapper[4860]: I1014 15:03:19.883090 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c6bxt" Oct 14 15:03:29 crc kubenswrapper[4860]: I1014 15:03:29.246059 4860 patch_prober.go:28] interesting 
pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:03:29 crc kubenswrapper[4860]: I1014 15:03:29.246599 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:31.994147 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6"] Oct 14 15:03:32 crc kubenswrapper[4860]: E1014 15:03:31.994838 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78d9369-55a0-45c3-b850-e92917256af4" containerName="registry-server" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:31.994848 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78d9369-55a0-45c3-b850-e92917256af4" containerName="registry-server" Oct 14 15:03:32 crc kubenswrapper[4860]: E1014 15:03:31.994859 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78d9369-55a0-45c3-b850-e92917256af4" containerName="extract-utilities" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:31.994866 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78d9369-55a0-45c3-b850-e92917256af4" containerName="extract-utilities" Oct 14 15:03:32 crc kubenswrapper[4860]: E1014 15:03:31.994876 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fdc501a-d834-461d-bef2-5c8e49751f2d" containerName="extract-utilities" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:31.994882 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdc501a-d834-461d-bef2-5c8e49751f2d" containerName="extract-utilities" Oct 14 15:03:32 crc kubenswrapper[4860]: E1014 15:03:31.994897 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96856478-85d6-461e-9188-f8bff53f9b03" containerName="extract-utilities" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:31.994902 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="96856478-85d6-461e-9188-f8bff53f9b03" containerName="extract-utilities" Oct 14 15:03:32 crc kubenswrapper[4860]: E1014 15:03:31.994909 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78d9369-55a0-45c3-b850-e92917256af4" containerName="extract-content" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:31.994915 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78d9369-55a0-45c3-b850-e92917256af4" containerName="extract-content" Oct 14 15:03:32 crc kubenswrapper[4860]: E1014 15:03:31.994924 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fdc501a-d834-461d-bef2-5c8e49751f2d" containerName="registry-server" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:31.994929 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdc501a-d834-461d-bef2-5c8e49751f2d" containerName="registry-server" Oct 14 15:03:32 crc kubenswrapper[4860]: E1014 15:03:31.994936 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96856478-85d6-461e-9188-f8bff53f9b03" containerName="registry-server" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:31.994941 4860 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="96856478-85d6-461e-9188-f8bff53f9b03" containerName="registry-server" Oct 14 15:03:32 crc kubenswrapper[4860]: E1014 15:03:31.994949 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fdc501a-d834-461d-bef2-5c8e49751f2d" containerName="extract-content" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:31.994954 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdc501a-d834-461d-bef2-5c8e49751f2d" containerName="extract-content" Oct 14 15:03:32 crc kubenswrapper[4860]: E1014 15:03:31.994964 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96856478-85d6-461e-9188-f8bff53f9b03" containerName="extract-content" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:31.994969 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="96856478-85d6-461e-9188-f8bff53f9b03" containerName="extract-content" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:31.995075 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e78d9369-55a0-45c3-b850-e92917256af4" containerName="registry-server" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:31.995086 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fdc501a-d834-461d-bef2-5c8e49751f2d" containerName="registry-server" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:31.995096 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="96856478-85d6-461e-9188-f8bff53f9b03" containerName="registry-server" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:31.995762 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:32.083519 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:32.105209 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6"] Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:32.177977 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mntc\" (UniqueName: \"kubernetes.io/projected/8f866932-2796-4d36-82ea-ffac60aee340-kube-api-access-4mntc\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6\" (UID: \"8f866932-2796-4d36-82ea-ffac60aee340\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:32.178033 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f866932-2796-4d36-82ea-ffac60aee340-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6\" (UID: \"8f866932-2796-4d36-82ea-ffac60aee340\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:32.178089 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f866932-2796-4d36-82ea-ffac60aee340-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6\" (UID: \"8f866932-2796-4d36-82ea-ffac60aee340\") " 
pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:32.279284 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mntc\" (UniqueName: \"kubernetes.io/projected/8f866932-2796-4d36-82ea-ffac60aee340-kube-api-access-4mntc\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6\" (UID: \"8f866932-2796-4d36-82ea-ffac60aee340\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:32.279365 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f866932-2796-4d36-82ea-ffac60aee340-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6\" (UID: \"8f866932-2796-4d36-82ea-ffac60aee340\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:32.279406 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f866932-2796-4d36-82ea-ffac60aee340-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6\" (UID: \"8f866932-2796-4d36-82ea-ffac60aee340\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:32.279970 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f866932-2796-4d36-82ea-ffac60aee340-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6\" (UID: \"8f866932-2796-4d36-82ea-ffac60aee340\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:32.280120 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f866932-2796-4d36-82ea-ffac60aee340-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6\" (UID: \"8f866932-2796-4d36-82ea-ffac60aee340\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:32.298068 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mntc\" (UniqueName: \"kubernetes.io/projected/8f866932-2796-4d36-82ea-ffac60aee340-kube-api-access-4mntc\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6\" (UID: \"8f866932-2796-4d36-82ea-ffac60aee340\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:32.399611 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" Oct 14 15:03:32 crc kubenswrapper[4860]: I1014 15:03:32.591370 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6"] Oct 14 15:03:33 crc kubenswrapper[4860]: I1014 15:03:33.227684 4860 generic.go:334] "Generic (PLEG): container finished" podID="8f866932-2796-4d36-82ea-ffac60aee340" containerID="25cc9b2a86c14ddfb98b50c4370ff4447b47d73a2b5e334475b76ff4e4507bcd" exitCode=0 Oct 14 15:03:33 crc kubenswrapper[4860]: I1014 15:03:33.227721 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" event={"ID":"8f866932-2796-4d36-82ea-ffac60aee340","Type":"ContainerDied","Data":"25cc9b2a86c14ddfb98b50c4370ff4447b47d73a2b5e334475b76ff4e4507bcd"} Oct 14 15:03:33 crc kubenswrapper[4860]: I1014 15:03:33.227757 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" event={"ID":"8f866932-2796-4d36-82ea-ffac60aee340","Type":"ContainerStarted","Data":"4e9e457d76c9b65e95bec16fa4bca1fce2bac8c33f061e08330f2530aabb8fc4"} Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.181676 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-sr5b4" podUID="b1b285a3-b917-4698-860d-a00c351727f2" containerName="console" containerID="cri-o://ce7a84c71cb491e14b7280ce39da18725219aec4d3ccd161366f8897e7b1123d" gracePeriod=15 Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.242780 4860 generic.go:334] "Generic (PLEG): container finished" podID="8f866932-2796-4d36-82ea-ffac60aee340" containerID="6405c621c9e56ccf87dd0980afaacbb7d180ceaa4aca9117b93f9b8fd3c35ab2" exitCode=0 Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.242814 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" event={"ID":"8f866932-2796-4d36-82ea-ffac60aee340","Type":"ContainerDied","Data":"6405c621c9e56ccf87dd0980afaacbb7d180ceaa4aca9117b93f9b8fd3c35ab2"} Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.570569 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sr5b4_b1b285a3-b917-4698-860d-a00c351727f2/console/0.log" Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.570633 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.627414 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-trusted-ca-bundle\") pod \"b1b285a3-b917-4698-860d-a00c351727f2\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.627459 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b1b285a3-b917-4698-860d-a00c351727f2-console-oauth-config\") pod \"b1b285a3-b917-4698-860d-a00c351727f2\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.627525 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-console-config\") pod \"b1b285a3-b917-4698-860d-a00c351727f2\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.627543 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-oauth-serving-cert\") pod \"b1b285a3-b917-4698-860d-a00c351727f2\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.627560 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b285a3-b917-4698-860d-a00c351727f2-console-serving-cert\") pod \"b1b285a3-b917-4698-860d-a00c351727f2\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.627595 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhh6m\" (UniqueName: \"kubernetes.io/projected/b1b285a3-b917-4698-860d-a00c351727f2-kube-api-access-bhh6m\") pod \"b1b285a3-b917-4698-860d-a00c351727f2\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.627630 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-service-ca\") pod \"b1b285a3-b917-4698-860d-a00c351727f2\" (UID: \"b1b285a3-b917-4698-860d-a00c351727f2\") " Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.628636 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b1b285a3-b917-4698-860d-a00c351727f2" (UID: "b1b285a3-b917-4698-860d-a00c351727f2"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.628648 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-service-ca" (OuterVolumeSpecName: "service-ca") pod "b1b285a3-b917-4698-860d-a00c351727f2" (UID: "b1b285a3-b917-4698-860d-a00c351727f2"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.628896 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-console-config" (OuterVolumeSpecName: "console-config") pod "b1b285a3-b917-4698-860d-a00c351727f2" (UID: "b1b285a3-b917-4698-860d-a00c351727f2"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.629603 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b1b285a3-b917-4698-860d-a00c351727f2" (UID: "b1b285a3-b917-4698-860d-a00c351727f2"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.633921 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b285a3-b917-4698-860d-a00c351727f2-kube-api-access-bhh6m" (OuterVolumeSpecName: "kube-api-access-bhh6m") pod "b1b285a3-b917-4698-860d-a00c351727f2" (UID: "b1b285a3-b917-4698-860d-a00c351727f2"). InnerVolumeSpecName "kube-api-access-bhh6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.633912 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b285a3-b917-4698-860d-a00c351727f2-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b1b285a3-b917-4698-860d-a00c351727f2" (UID: "b1b285a3-b917-4698-860d-a00c351727f2"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.634956 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b285a3-b917-4698-860d-a00c351727f2-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b1b285a3-b917-4698-860d-a00c351727f2" (UID: "b1b285a3-b917-4698-860d-a00c351727f2"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.728728 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.728925 4860 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b1b285a3-b917-4698-860d-a00c351727f2-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.729014 4860 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-console-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.729115 4860 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.729168 4860 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b285a3-b917-4698-860d-a00c351727f2-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.729220 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhh6m\" (UniqueName: \"kubernetes.io/projected/b1b285a3-b917-4698-860d-a00c351727f2-kube-api-access-bhh6m\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:35 crc kubenswrapper[4860]: I1014 15:03:35.729294 4860 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b1b285a3-b917-4698-860d-a00c351727f2-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:36 crc kubenswrapper[4860]: I1014 15:03:36.249970 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sr5b4_b1b285a3-b917-4698-860d-a00c351727f2/console/0.log" Oct 14 15:03:36 crc kubenswrapper[4860]: I1014 15:03:36.250101 4860 generic.go:334] "Generic (PLEG): container finished" podID="b1b285a3-b917-4698-860d-a00c351727f2" containerID="ce7a84c71cb491e14b7280ce39da18725219aec4d3ccd161366f8897e7b1123d" exitCode=2 Oct 14 15:03:36 crc kubenswrapper[4860]: I1014 15:03:36.250165 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sr5b4" Oct 14 15:03:36 crc kubenswrapper[4860]: I1014 15:03:36.250216 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sr5b4" event={"ID":"b1b285a3-b917-4698-860d-a00c351727f2","Type":"ContainerDied","Data":"ce7a84c71cb491e14b7280ce39da18725219aec4d3ccd161366f8897e7b1123d"} Oct 14 15:03:36 crc kubenswrapper[4860]: I1014 15:03:36.250264 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sr5b4" event={"ID":"b1b285a3-b917-4698-860d-a00c351727f2","Type":"ContainerDied","Data":"6c8c8489645c8daff9f497f6bc6f1668837929d1480c2d85d467fa2f5b92e280"} Oct 14 15:03:36 crc kubenswrapper[4860]: I1014 15:03:36.250306 4860 scope.go:117] "RemoveContainer" containerID="ce7a84c71cb491e14b7280ce39da18725219aec4d3ccd161366f8897e7b1123d" Oct 14 15:03:36 crc kubenswrapper[4860]: I1014 15:03:36.254805 4860 generic.go:334] "Generic (PLEG): container finished" podID="8f866932-2796-4d36-82ea-ffac60aee340" containerID="27461416b76cd6550bd0667821a5d3465ffa21b147f2a60afb2e2afc8cbfd4ef" exitCode=0 Oct 14 15:03:36 crc kubenswrapper[4860]: I1014 15:03:36.254863 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" event={"ID":"8f866932-2796-4d36-82ea-ffac60aee340","Type":"ContainerDied","Data":"27461416b76cd6550bd0667821a5d3465ffa21b147f2a60afb2e2afc8cbfd4ef"} Oct 14 15:03:36 crc kubenswrapper[4860]: I1014 15:03:36.283830 4860 scope.go:117] "RemoveContainer" containerID="ce7a84c71cb491e14b7280ce39da18725219aec4d3ccd161366f8897e7b1123d" Oct 14 15:03:36 crc kubenswrapper[4860]: E1014 15:03:36.284804 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce7a84c71cb491e14b7280ce39da18725219aec4d3ccd161366f8897e7b1123d\": container with ID starting with ce7a84c71cb491e14b7280ce39da18725219aec4d3ccd161366f8897e7b1123d not found: ID does not exist" containerID="ce7a84c71cb491e14b7280ce39da18725219aec4d3ccd161366f8897e7b1123d" Oct 14 15:03:36 crc kubenswrapper[4860]: I1014 15:03:36.284857 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce7a84c71cb491e14b7280ce39da18725219aec4d3ccd161366f8897e7b1123d"} err="failed to get container status \"ce7a84c71cb491e14b7280ce39da18725219aec4d3ccd161366f8897e7b1123d\": rpc error: code = NotFound desc = could not find container \"ce7a84c71cb491e14b7280ce39da18725219aec4d3ccd161366f8897e7b1123d\": container with ID starting with ce7a84c71cb491e14b7280ce39da18725219aec4d3ccd161366f8897e7b1123d not found: ID does not exist" Oct 14 15:03:36 crc kubenswrapper[4860]: I1014 15:03:36.296634 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sr5b4"] Oct 14 15:03:36 crc kubenswrapper[4860]: I1014 15:03:36.301200 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-sr5b4"] Oct 14 15:03:37 crc kubenswrapper[4860]: I1014 15:03:37.069516 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b285a3-b917-4698-860d-a00c351727f2" path="/var/lib/kubelet/pods/b1b285a3-b917-4698-860d-a00c351727f2/volumes" Oct 14 15:03:37 crc kubenswrapper[4860]: I1014 15:03:37.463054 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" Oct 14 15:03:37 crc kubenswrapper[4860]: I1014 15:03:37.548474 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mntc\" (UniqueName: \"kubernetes.io/projected/8f866932-2796-4d36-82ea-ffac60aee340-kube-api-access-4mntc\") pod \"8f866932-2796-4d36-82ea-ffac60aee340\" (UID: \"8f866932-2796-4d36-82ea-ffac60aee340\") " Oct 14 15:03:37 crc kubenswrapper[4860]: I1014 15:03:37.549171 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f866932-2796-4d36-82ea-ffac60aee340-util\") pod \"8f866932-2796-4d36-82ea-ffac60aee340\" (UID: \"8f866932-2796-4d36-82ea-ffac60aee340\") " Oct 14 15:03:37 crc kubenswrapper[4860]: I1014 15:03:37.549241 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f866932-2796-4d36-82ea-ffac60aee340-bundle\") pod \"8f866932-2796-4d36-82ea-ffac60aee340\" (UID: \"8f866932-2796-4d36-82ea-ffac60aee340\") " Oct 14 15:03:37 crc kubenswrapper[4860]: I1014 15:03:37.550191 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f866932-2796-4d36-82ea-ffac60aee340-bundle" (OuterVolumeSpecName: "bundle") pod "8f866932-2796-4d36-82ea-ffac60aee340" (UID: "8f866932-2796-4d36-82ea-ffac60aee340"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:03:37 crc kubenswrapper[4860]: I1014 15:03:37.553200 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f866932-2796-4d36-82ea-ffac60aee340-kube-api-access-4mntc" (OuterVolumeSpecName: "kube-api-access-4mntc") pod "8f866932-2796-4d36-82ea-ffac60aee340" (UID: "8f866932-2796-4d36-82ea-ffac60aee340"). InnerVolumeSpecName "kube-api-access-4mntc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:03:37 crc kubenswrapper[4860]: I1014 15:03:37.566910 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f866932-2796-4d36-82ea-ffac60aee340-util" (OuterVolumeSpecName: "util") pod "8f866932-2796-4d36-82ea-ffac60aee340" (UID: "8f866932-2796-4d36-82ea-ffac60aee340"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:03:37 crc kubenswrapper[4860]: I1014 15:03:37.650653 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mntc\" (UniqueName: \"kubernetes.io/projected/8f866932-2796-4d36-82ea-ffac60aee340-kube-api-access-4mntc\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:37 crc kubenswrapper[4860]: I1014 15:03:37.650678 4860 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f866932-2796-4d36-82ea-ffac60aee340-util\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:37 crc kubenswrapper[4860]: I1014 15:03:37.650712 4860 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f866932-2796-4d36-82ea-ffac60aee340-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:03:38 crc kubenswrapper[4860]: I1014 15:03:38.268150 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" event={"ID":"8f866932-2796-4d36-82ea-ffac60aee340","Type":"ContainerDied","Data":"4e9e457d76c9b65e95bec16fa4bca1fce2bac8c33f061e08330f2530aabb8fc4"} Oct 14 15:03:38 crc kubenswrapper[4860]: I1014 15:03:38.268486 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e9e457d76c9b65e95bec16fa4bca1fce2bac8c33f061e08330f2530aabb8fc4" Oct 14 15:03:38 crc kubenswrapper[4860]: I1014 15:03:38.268202 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6" Oct 14 15:03:46 crc kubenswrapper[4860]: I1014 15:03:46.911535 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7"] Oct 14 15:03:46 crc kubenswrapper[4860]: E1014 15:03:46.912065 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f866932-2796-4d36-82ea-ffac60aee340" containerName="extract" Oct 14 15:03:46 crc kubenswrapper[4860]: I1014 15:03:46.912079 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f866932-2796-4d36-82ea-ffac60aee340" containerName="extract" Oct 14 15:03:46 crc kubenswrapper[4860]: E1014 15:03:46.912092 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f866932-2796-4d36-82ea-ffac60aee340" containerName="pull" Oct 14 15:03:46 crc kubenswrapper[4860]: I1014 15:03:46.912112 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f866932-2796-4d36-82ea-ffac60aee340" containerName="pull" Oct 14 15:03:46 crc kubenswrapper[4860]: E1014 15:03:46.912125 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b285a3-b917-4698-860d-a00c351727f2" containerName="console" Oct 14 15:03:46 crc kubenswrapper[4860]: I1014 15:03:46.912133 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b285a3-b917-4698-860d-a00c351727f2" containerName="console" Oct 14 15:03:46 crc kubenswrapper[4860]: E1014 15:03:46.912182 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f866932-2796-4d36-82ea-ffac60aee340" containerName="util" Oct 14 15:03:46 crc kubenswrapper[4860]: I1014 15:03:46.912190 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f866932-2796-4d36-82ea-ffac60aee340" containerName="util" Oct 14 15:03:46 crc kubenswrapper[4860]: I1014 15:03:46.912367 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b285a3-b917-4698-860d-a00c351727f2" containerName="console" Oct 
14 15:03:46 crc kubenswrapper[4860]: I1014 15:03:46.912397 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f866932-2796-4d36-82ea-ffac60aee340" containerName="extract" Oct 14 15:03:46 crc kubenswrapper[4860]: I1014 15:03:46.912777 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7" Oct 14 15:03:46 crc kubenswrapper[4860]: W1014 15:03:46.915386 4860 reflector.go:561] object-"metallb-system"/"metallb-operator-controller-manager-service-cert": failed to list *v1.Secret: secrets "metallb-operator-controller-manager-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Oct 14 15:03:46 crc kubenswrapper[4860]: E1014 15:03:46.915437 4860 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-controller-manager-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-operator-controller-manager-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 14 15:03:46 crc kubenswrapper[4860]: W1014 15:03:46.915477 4860 reflector.go:561] object-"metallb-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Oct 14 15:03:46 crc kubenswrapper[4860]: E1014 15:03:46.915487 4860 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 14 15:03:46 crc kubenswrapper[4860]: W1014 15:03:46.915599 4860 reflector.go:561] object-"metallb-system"/"manager-account-dockercfg-qvql9": failed to list *v1.Secret: secrets "manager-account-dockercfg-qvql9" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Oct 14 15:03:46 crc kubenswrapper[4860]: E1014 15:03:46.915613 4860 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"manager-account-dockercfg-qvql9\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"manager-account-dockercfg-qvql9\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 14 15:03:46 crc kubenswrapper[4860]: W1014 15:03:46.915640 4860 reflector.go:561] object-"metallb-system"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Oct 14 15:03:46 crc kubenswrapper[4860]: E1014 15:03:46.915650 4860 reflector.go:158] "Unhandled Error" 
err="object-\"metallb-system\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 14 15:03:46 crc kubenswrapper[4860]: W1014 15:03:46.919520 4860 reflector.go:561] object-"metallb-system"/"metallb-operator-webhook-server-cert": failed to list *v1.Secret: secrets "metallb-operator-webhook-server-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Oct 14 15:03:46 crc kubenswrapper[4860]: E1014 15:03:46.919560 4860 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-operator-webhook-server-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 14 15:03:46 crc kubenswrapper[4860]: I1014 15:03:46.943620 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7"] Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.061196 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnz7j\" (UniqueName: \"kubernetes.io/projected/03cac4a6-319b-4df3-baf8-82868fa438e5-kube-api-access-wnz7j\") pod \"metallb-operator-controller-manager-8575fd6987-wq9q7\" (UID: \"03cac4a6-319b-4df3-baf8-82868fa438e5\") " pod="metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.061277 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/03cac4a6-319b-4df3-baf8-82868fa438e5-apiservice-cert\") pod \"metallb-operator-controller-manager-8575fd6987-wq9q7\" (UID: \"03cac4a6-319b-4df3-baf8-82868fa438e5\") " pod="metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.061338 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/03cac4a6-319b-4df3-baf8-82868fa438e5-webhook-cert\") pod \"metallb-operator-controller-manager-8575fd6987-wq9q7\" (UID: \"03cac4a6-319b-4df3-baf8-82868fa438e5\") " pod="metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.161949 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnz7j\" (UniqueName: \"kubernetes.io/projected/03cac4a6-319b-4df3-baf8-82868fa438e5-kube-api-access-wnz7j\") pod \"metallb-operator-controller-manager-8575fd6987-wq9q7\" (UID: \"03cac4a6-319b-4df3-baf8-82868fa438e5\") " pod="metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.162012 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/03cac4a6-319b-4df3-baf8-82868fa438e5-apiservice-cert\") pod \"metallb-operator-controller-manager-8575fd6987-wq9q7\" (UID: \"03cac4a6-319b-4df3-baf8-82868fa438e5\") " pod="metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.162062 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/03cac4a6-319b-4df3-baf8-82868fa438e5-webhook-cert\") pod \"metallb-operator-controller-manager-8575fd6987-wq9q7\" (UID: \"03cac4a6-319b-4df3-baf8-82868fa438e5\") " pod="metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.225684 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh"] Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.226381 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.235801 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.235801 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.235845 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-gbprc" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.247415 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh"] Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.364997 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj9jd\" (UniqueName: \"kubernetes.io/projected/19ebf47b-7556-421f-bc1a-442040a5995c-kube-api-access-cj9jd\") pod \"metallb-operator-webhook-server-75cf49597f-kjngh\" (UID: \"19ebf47b-7556-421f-bc1a-442040a5995c\") " pod="metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.365074 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19ebf47b-7556-421f-bc1a-442040a5995c-webhook-cert\") pod \"metallb-operator-webhook-server-75cf49597f-kjngh\" (UID: \"19ebf47b-7556-421f-bc1a-442040a5995c\") " pod="metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.365107 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19ebf47b-7556-421f-bc1a-442040a5995c-apiservice-cert\") pod \"metallb-operator-webhook-server-75cf49597f-kjngh\" (UID: \"19ebf47b-7556-421f-bc1a-442040a5995c\") " pod="metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.466300 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19ebf47b-7556-421f-bc1a-442040a5995c-apiservice-cert\") pod \"metallb-operator-webhook-server-75cf49597f-kjngh\" (UID: \"19ebf47b-7556-421f-bc1a-442040a5995c\") " 
pod="metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.466460 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj9jd\" (UniqueName: \"kubernetes.io/projected/19ebf47b-7556-421f-bc1a-442040a5995c-kube-api-access-cj9jd\") pod \"metallb-operator-webhook-server-75cf49597f-kjngh\" (UID: \"19ebf47b-7556-421f-bc1a-442040a5995c\") " pod="metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.466528 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19ebf47b-7556-421f-bc1a-442040a5995c-webhook-cert\") pod \"metallb-operator-webhook-server-75cf49597f-kjngh\" (UID: \"19ebf47b-7556-421f-bc1a-442040a5995c\") " pod="metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.473776 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19ebf47b-7556-421f-bc1a-442040a5995c-webhook-cert\") pod \"metallb-operator-webhook-server-75cf49597f-kjngh\" (UID: \"19ebf47b-7556-421f-bc1a-442040a5995c\") " pod="metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.482251 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19ebf47b-7556-421f-bc1a-442040a5995c-apiservice-cert\") pod \"metallb-operator-webhook-server-75cf49597f-kjngh\" (UID: \"19ebf47b-7556-421f-bc1a-442040a5995c\") " pod="metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.805184 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-qvql9" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.887337 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.909804 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.916886 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/03cac4a6-319b-4df3-baf8-82868fa438e5-apiservice-cert\") pod \"metallb-operator-controller-manager-8575fd6987-wq9q7\" (UID: \"03cac4a6-319b-4df3-baf8-82868fa438e5\") " pod="metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7" Oct 14 15:03:47 crc kubenswrapper[4860]: I1014 15:03:47.918048 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/03cac4a6-319b-4df3-baf8-82868fa438e5-webhook-cert\") pod \"metallb-operator-controller-manager-8575fd6987-wq9q7\" (UID: \"03cac4a6-319b-4df3-baf8-82868fa438e5\") " pod="metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7" Oct 14 15:03:48 crc kubenswrapper[4860]: I1014 15:03:48.027425 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 14 15:03:48 crc kubenswrapper[4860]: I1014 15:03:48.415132 4860 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"metallb-system"/"openshift-service-ca.crt" Oct 14 15:03:48 crc kubenswrapper[4860]: I1014 15:03:48.423008 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnz7j\" (UniqueName: \"kubernetes.io/projected/03cac4a6-319b-4df3-baf8-82868fa438e5-kube-api-access-wnz7j\") pod \"metallb-operator-controller-manager-8575fd6987-wq9q7\" (UID: \"03cac4a6-319b-4df3-baf8-82868fa438e5\") " pod="metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7" Oct 14 15:03:48 crc kubenswrapper[4860]: I1014 15:03:48.427690 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj9jd\" (UniqueName: \"kubernetes.io/projected/19ebf47b-7556-421f-bc1a-442040a5995c-kube-api-access-cj9jd\") pod \"metallb-operator-webhook-server-75cf49597f-kjngh\" (UID: \"19ebf47b-7556-421f-bc1a-442040a5995c\") " pod="metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh" Oct 14 15:03:48 crc kubenswrapper[4860]: I1014 15:03:48.432228 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7" Oct 14 15:03:48 crc kubenswrapper[4860]: I1014 15:03:48.438708 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh" Oct 14 15:03:48 crc kubenswrapper[4860]: I1014 15:03:48.823271 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7"] Oct 14 15:03:48 crc kubenswrapper[4860]: I1014 15:03:48.872371 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh"] Oct 14 15:03:48 crc kubenswrapper[4860]: W1014 15:03:48.878149 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19ebf47b_7556_421f_bc1a_442040a5995c.slice/crio-96d36cea3e80dec7c67001524d3697778bd4d3c6d5b4a38c81309a92ea315b71 WatchSource:0}: Error finding container 96d36cea3e80dec7c67001524d3697778bd4d3c6d5b4a38c81309a92ea315b71: Status 404 returned error can't find the container with id 96d36cea3e80dec7c67001524d3697778bd4d3c6d5b4a38c81309a92ea315b71 Oct 14 15:03:49 crc kubenswrapper[4860]: I1014 15:03:49.319668 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7" event={"ID":"03cac4a6-319b-4df3-baf8-82868fa438e5","Type":"ContainerStarted","Data":"98275e3780196be83644dabf9230e77b3780a931e28d66e7750e7c20253cce40"} Oct 14 15:03:49 crc kubenswrapper[4860]: I1014 15:03:49.321224 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh" event={"ID":"19ebf47b-7556-421f-bc1a-442040a5995c","Type":"ContainerStarted","Data":"96d36cea3e80dec7c67001524d3697778bd4d3c6d5b4a38c81309a92ea315b71"} Oct 14 15:03:55 crc kubenswrapper[4860]: I1014 15:03:55.366060 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7" event={"ID":"03cac4a6-319b-4df3-baf8-82868fa438e5","Type":"ContainerStarted","Data":"ba51e4886f1bf099796ac4952f590e64496f085d64726e19f014f6204969735b"} Oct 14 15:03:55 crc kubenswrapper[4860]: I1014 15:03:55.366706 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7" Oct 14 15:03:55 crc 
kubenswrapper[4860]: I1014 15:03:55.368605 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh" event={"ID":"19ebf47b-7556-421f-bc1a-442040a5995c","Type":"ContainerStarted","Data":"564ea26d5afcf0e71b61024f8a7513c0f81d59350f50b1197c586a91f5d9797a"} Oct 14 15:03:55 crc kubenswrapper[4860]: I1014 15:03:55.368836 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh" Oct 14 15:03:55 crc kubenswrapper[4860]: I1014 15:03:55.401316 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7" podStartSLOduration=3.469510212 podStartE2EDuration="9.401291313s" podCreationTimestamp="2025-10-14 15:03:46 +0000 UTC" firstStartedPulling="2025-10-14 15:03:48.838360005 +0000 UTC m=+890.425143454" lastFinishedPulling="2025-10-14 15:03:54.770141106 +0000 UTC m=+896.356924555" observedRunningTime="2025-10-14 15:03:55.397704056 +0000 UTC m=+896.984487505" watchObservedRunningTime="2025-10-14 15:03:55.401291313 +0000 UTC m=+896.988074762" Oct 14 15:03:55 crc kubenswrapper[4860]: I1014 15:03:55.435509 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh" podStartSLOduration=2.526646829 podStartE2EDuration="8.435489673s" podCreationTimestamp="2025-10-14 15:03:47 +0000 UTC" firstStartedPulling="2025-10-14 15:03:48.880303093 +0000 UTC m=+890.467086542" lastFinishedPulling="2025-10-14 15:03:54.789145937 +0000 UTC m=+896.375929386" observedRunningTime="2025-10-14 15:03:55.433747321 +0000 UTC m=+897.020530770" watchObservedRunningTime="2025-10-14 15:03:55.435489673 +0000 UTC m=+897.022273122" Oct 14 15:03:59 crc kubenswrapper[4860]: I1014 15:03:59.245306 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:03:59 crc kubenswrapper[4860]: I1014 15:03:59.245646 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:04:08 crc kubenswrapper[4860]: I1014 15:04:08.443808 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-75cf49597f-kjngh" Oct 14 15:04:28 crc kubenswrapper[4860]: I1014 15:04:28.434709 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8575fd6987-wq9q7" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.150894 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-7545s"] Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.153700 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.155493 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.156017 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.157158 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-fs8qc" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.164957 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb"] Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.165633 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.167814 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.204279 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb"] Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.245724 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.245798 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.245844 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.246404 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"35f60ae25f79186f53f554e65dfb897f3e59fbee448cf25d36669e90dcf31a8b"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.246454 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://35f60ae25f79186f53f554e65dfb897f3e59fbee448cf25d36669e90dcf31a8b" gracePeriod=600 Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.324099 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-frr-sockets\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.324436 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db44a95a-8142-4353-affc-7227a205135c-cert\") pod \"frr-k8s-webhook-server-64bf5d555-zvcbb\" (UID: \"db44a95a-8142-4353-affc-7227a205135c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.324472 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tncfc\" (UniqueName: \"kubernetes.io/projected/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-kube-api-access-tncfc\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.324489 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8s7c\" (UniqueName: \"kubernetes.io/projected/db44a95a-8142-4353-affc-7227a205135c-kube-api-access-x8s7c\") pod \"frr-k8s-webhook-server-64bf5d555-zvcbb\" (UID: \"db44a95a-8142-4353-affc-7227a205135c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.324505 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-metrics-certs\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.324528 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-reloader\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.324684 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-frr-startup\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.324767 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-metrics\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.324838 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-frr-conf\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.334782 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-np5tb"] Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.335895 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-68d546b9d8-np5tb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.342594 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.361619 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-tbgl7"] Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.362441 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-tbgl7" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.385022 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.385206 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.385323 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.386063 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-vvlhj" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.393948 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-np5tb"] Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.443984 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-frr-sockets\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.444037 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db44a95a-8142-4353-affc-7227a205135c-cert\") pod \"frr-k8s-webhook-server-64bf5d555-zvcbb\" (UID: \"db44a95a-8142-4353-affc-7227a205135c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.444063 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-memberlist\") pod \"speaker-tbgl7\" (UID: \"9ded0ce9-6baf-429a-b3ad-493b2bfda7de\") " pod="metallb-system/speaker-tbgl7" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.444078 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-metrics-certs\") pod \"speaker-tbgl7\" (UID: \"9ded0ce9-6baf-429a-b3ad-493b2bfda7de\") " pod="metallb-system/speaker-tbgl7" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.444097 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76eff526-1e46-4804-a3ef-dfdc845038d7-metrics-certs\") pod \"controller-68d546b9d8-np5tb\" (UID: \"76eff526-1e46-4804-a3ef-dfdc845038d7\") " pod="metallb-system/controller-68d546b9d8-np5tb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.444115 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tncfc\" (UniqueName: 
\"kubernetes.io/projected/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-kube-api-access-tncfc\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.444130 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-metallb-excludel2\") pod \"speaker-tbgl7\" (UID: \"9ded0ce9-6baf-429a-b3ad-493b2bfda7de\") " pod="metallb-system/speaker-tbgl7" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.444149 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8s7c\" (UniqueName: \"kubernetes.io/projected/db44a95a-8142-4353-affc-7227a205135c-kube-api-access-x8s7c\") pod \"frr-k8s-webhook-server-64bf5d555-zvcbb\" (UID: \"db44a95a-8142-4353-affc-7227a205135c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.444170 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-metrics-certs\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.444188 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jtp9\" (UniqueName: \"kubernetes.io/projected/76eff526-1e46-4804-a3ef-dfdc845038d7-kube-api-access-7jtp9\") pod \"controller-68d546b9d8-np5tb\" (UID: \"76eff526-1e46-4804-a3ef-dfdc845038d7\") " pod="metallb-system/controller-68d546b9d8-np5tb" Oct 14 15:04:29 crc kubenswrapper[4860]: E1014 15:04:29.444201 4860 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.444208 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-reloader\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.444228 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-frr-startup\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: E1014 15:04:29.444257 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db44a95a-8142-4353-affc-7227a205135c-cert podName:db44a95a-8142-4353-affc-7227a205135c nodeName:}" failed. No retries permitted until 2025-10-14 15:04:29.944237788 +0000 UTC m=+931.531021237 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/db44a95a-8142-4353-affc-7227a205135c-cert") pod "frr-k8s-webhook-server-64bf5d555-zvcbb" (UID: "db44a95a-8142-4353-affc-7227a205135c") : secret "frr-k8s-webhook-server-cert" not found Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.444278 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-metrics\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.444500 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76eff526-1e46-4804-a3ef-dfdc845038d7-cert\") pod \"controller-68d546b9d8-np5tb\" (UID: \"76eff526-1e46-4804-a3ef-dfdc845038d7\") " pod="metallb-system/controller-68d546b9d8-np5tb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.444525 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-frr-conf\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.444565 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwjd7\" (UniqueName: \"kubernetes.io/projected/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-kube-api-access-xwjd7\") pod \"speaker-tbgl7\" (UID: \"9ded0ce9-6baf-429a-b3ad-493b2bfda7de\") " pod="metallb-system/speaker-tbgl7" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.444911 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-frr-sockets\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.445155 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-frr-startup\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.445367 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-metrics\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: E1014 15:04:29.445463 4860 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Oct 14 15:04:29 crc kubenswrapper[4860]: E1014 15:04:29.445492 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-metrics-certs podName:63f2d00d-6dad-48ec-91c9-33ba7f88c5f2 nodeName:}" failed. No retries permitted until 2025-10-14 15:04:29.945483268 +0000 UTC m=+931.532266717 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-metrics-certs") pod "frr-k8s-7545s" (UID: "63f2d00d-6dad-48ec-91c9-33ba7f88c5f2") : secret "frr-k8s-certs-secret" not found Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.445604 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-frr-conf\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.445878 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-reloader\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.503162 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tncfc\" (UniqueName: \"kubernetes.io/projected/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-kube-api-access-tncfc\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.508079 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8s7c\" (UniqueName: \"kubernetes.io/projected/db44a95a-8142-4353-affc-7227a205135c-kube-api-access-x8s7c\") pod \"frr-k8s-webhook-server-64bf5d555-zvcbb\" (UID: \"db44a95a-8142-4353-affc-7227a205135c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.548158 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-memberlist\") pod \"speaker-tbgl7\" (UID: \"9ded0ce9-6baf-429a-b3ad-493b2bfda7de\") " pod="metallb-system/speaker-tbgl7" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.548206 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76eff526-1e46-4804-a3ef-dfdc845038d7-metrics-certs\") pod \"controller-68d546b9d8-np5tb\" (UID: \"76eff526-1e46-4804-a3ef-dfdc845038d7\") " pod="metallb-system/controller-68d546b9d8-np5tb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.548229 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-metrics-certs\") pod \"speaker-tbgl7\" (UID: \"9ded0ce9-6baf-429a-b3ad-493b2bfda7de\") " pod="metallb-system/speaker-tbgl7" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.548257 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-metallb-excludel2\") pod \"speaker-tbgl7\" (UID: \"9ded0ce9-6baf-429a-b3ad-493b2bfda7de\") " pod="metallb-system/speaker-tbgl7" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.548308 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jtp9\" (UniqueName: \"kubernetes.io/projected/76eff526-1e46-4804-a3ef-dfdc845038d7-kube-api-access-7jtp9\") pod \"controller-68d546b9d8-np5tb\" (UID: 
\"76eff526-1e46-4804-a3ef-dfdc845038d7\") " pod="metallb-system/controller-68d546b9d8-np5tb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.548379 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76eff526-1e46-4804-a3ef-dfdc845038d7-cert\") pod \"controller-68d546b9d8-np5tb\" (UID: \"76eff526-1e46-4804-a3ef-dfdc845038d7\") " pod="metallb-system/controller-68d546b9d8-np5tb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.548441 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwjd7\" (UniqueName: \"kubernetes.io/projected/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-kube-api-access-xwjd7\") pod \"speaker-tbgl7\" (UID: \"9ded0ce9-6baf-429a-b3ad-493b2bfda7de\") " pod="metallb-system/speaker-tbgl7" Oct 14 15:04:29 crc kubenswrapper[4860]: E1014 15:04:29.548785 4860 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 14 15:04:29 crc kubenswrapper[4860]: E1014 15:04:29.548828 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-memberlist podName:9ded0ce9-6baf-429a-b3ad-493b2bfda7de nodeName:}" failed. No retries permitted until 2025-10-14 15:04:30.048816186 +0000 UTC m=+931.635599635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-memberlist") pod "speaker-tbgl7" (UID: "9ded0ce9-6baf-429a-b3ad-493b2bfda7de") : secret "metallb-memberlist" not found Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.550192 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-metallb-excludel2\") pod \"speaker-tbgl7\" (UID: \"9ded0ce9-6baf-429a-b3ad-493b2bfda7de\") " pod="metallb-system/speaker-tbgl7" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.555757 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76eff526-1e46-4804-a3ef-dfdc845038d7-metrics-certs\") pod \"controller-68d546b9d8-np5tb\" (UID: \"76eff526-1e46-4804-a3ef-dfdc845038d7\") " pod="metallb-system/controller-68d546b9d8-np5tb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.573652 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.574818 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-metrics-certs\") pod \"speaker-tbgl7\" (UID: \"9ded0ce9-6baf-429a-b3ad-493b2bfda7de\") " pod="metallb-system/speaker-tbgl7" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.577465 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76eff526-1e46-4804-a3ef-dfdc845038d7-cert\") pod \"controller-68d546b9d8-np5tb\" (UID: \"76eff526-1e46-4804-a3ef-dfdc845038d7\") " pod="metallb-system/controller-68d546b9d8-np5tb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.580734 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jtp9\" (UniqueName: \"kubernetes.io/projected/76eff526-1e46-4804-a3ef-dfdc845038d7-kube-api-access-7jtp9\") pod 
\"controller-68d546b9d8-np5tb\" (UID: \"76eff526-1e46-4804-a3ef-dfdc845038d7\") " pod="metallb-system/controller-68d546b9d8-np5tb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.596544 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="35f60ae25f79186f53f554e65dfb897f3e59fbee448cf25d36669e90dcf31a8b" exitCode=0 Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.596585 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"35f60ae25f79186f53f554e65dfb897f3e59fbee448cf25d36669e90dcf31a8b"} Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.596625 4860 scope.go:117] "RemoveContainer" containerID="966bd2ec6b906257cac7c7ee826b7b876455d65da8f5b51b82ca36af7678fd4f" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.608678 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwjd7\" (UniqueName: \"kubernetes.io/projected/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-kube-api-access-xwjd7\") pod \"speaker-tbgl7\" (UID: \"9ded0ce9-6baf-429a-b3ad-493b2bfda7de\") " pod="metallb-system/speaker-tbgl7" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.651398 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-np5tb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.896510 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-np5tb"] Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.957010 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db44a95a-8142-4353-affc-7227a205135c-cert\") pod \"frr-k8s-webhook-server-64bf5d555-zvcbb\" (UID: \"db44a95a-8142-4353-affc-7227a205135c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.957525 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-metrics-certs\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.963584 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63f2d00d-6dad-48ec-91c9-33ba7f88c5f2-metrics-certs\") pod \"frr-k8s-7545s\" (UID: \"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2\") " pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:29 crc kubenswrapper[4860]: I1014 15:04:29.963685 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db44a95a-8142-4353-affc-7227a205135c-cert\") pod \"frr-k8s-webhook-server-64bf5d555-zvcbb\" (UID: \"db44a95a-8142-4353-affc-7227a205135c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb" Oct 14 15:04:30 crc kubenswrapper[4860]: I1014 15:04:30.058891 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-memberlist\") pod \"speaker-tbgl7\" (UID: \"9ded0ce9-6baf-429a-b3ad-493b2bfda7de\") " pod="metallb-system/speaker-tbgl7" Oct 14 15:04:30 crc kubenswrapper[4860]: E1014 15:04:30.059102 4860 secret.go:188] 
Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 14 15:04:30 crc kubenswrapper[4860]: E1014 15:04:30.059158 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-memberlist podName:9ded0ce9-6baf-429a-b3ad-493b2bfda7de nodeName:}" failed. No retries permitted until 2025-10-14 15:04:31.059144432 +0000 UTC m=+932.645927881 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-memberlist") pod "speaker-tbgl7" (UID: "9ded0ce9-6baf-429a-b3ad-493b2bfda7de") : secret "metallb-memberlist" not found Oct 14 15:04:30 crc kubenswrapper[4860]: I1014 15:04:30.076662 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:30 crc kubenswrapper[4860]: I1014 15:04:30.084634 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb" Oct 14 15:04:30 crc kubenswrapper[4860]: I1014 15:04:30.315902 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb"] Oct 14 15:04:30 crc kubenswrapper[4860]: W1014 15:04:30.320405 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb44a95a_8142_4353_affc_7227a205135c.slice/crio-3225cbad9bdd1bacd370029c2eda1519b91d2285a63ad5b9f6869c64fb651533 WatchSource:0}: Error finding container 3225cbad9bdd1bacd370029c2eda1519b91d2285a63ad5b9f6869c64fb651533: Status 404 returned error can't find the container with id 3225cbad9bdd1bacd370029c2eda1519b91d2285a63ad5b9f6869c64fb651533 Oct 14 15:04:30 crc kubenswrapper[4860]: I1014 15:04:30.602249 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7545s" event={"ID":"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2","Type":"ContainerStarted","Data":"3e8236d52072c8ff2729fef9309d5c13371833cdc35eaf627ccc4cbd422e5bf7"} Oct 14 15:04:30 crc kubenswrapper[4860]: I1014 15:04:30.605519 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-np5tb" event={"ID":"76eff526-1e46-4804-a3ef-dfdc845038d7","Type":"ContainerStarted","Data":"d7d8450d8b5c3fe1f4aae4c8f74f1c494f49de6b4e3dc29abe946db7ce482dbf"} Oct 14 15:04:30 crc kubenswrapper[4860]: I1014 15:04:30.605576 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-np5tb" Oct 14 15:04:30 crc kubenswrapper[4860]: I1014 15:04:30.605588 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-np5tb" event={"ID":"76eff526-1e46-4804-a3ef-dfdc845038d7","Type":"ContainerStarted","Data":"2b6804a7f89ffea731273bb72d9042c293e93775eb6e824e90799069ebb02655"} Oct 14 15:04:30 crc kubenswrapper[4860]: I1014 15:04:30.605597 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-np5tb" event={"ID":"76eff526-1e46-4804-a3ef-dfdc845038d7","Type":"ContainerStarted","Data":"1af27d19666ca99b80ed03b6748924fee6ff4f55a5e26fac5c53c961b071f35f"} Oct 14 15:04:30 crc kubenswrapper[4860]: I1014 15:04:30.607480 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb" 
event={"ID":"db44a95a-8142-4353-affc-7227a205135c","Type":"ContainerStarted","Data":"3225cbad9bdd1bacd370029c2eda1519b91d2285a63ad5b9f6869c64fb651533"} Oct 14 15:04:30 crc kubenswrapper[4860]: I1014 15:04:30.610504 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"7c40d8caa5e52e82b9243eb5410bd9850080abe3ed1c63b68f1d1d3b4330efe8"} Oct 14 15:04:30 crc kubenswrapper[4860]: I1014 15:04:30.622715 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-np5tb" podStartSLOduration=1.622698247 podStartE2EDuration="1.622698247s" podCreationTimestamp="2025-10-14 15:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:04:30.621695373 +0000 UTC m=+932.208478822" watchObservedRunningTime="2025-10-14 15:04:30.622698247 +0000 UTC m=+932.209481696" Oct 14 15:04:31 crc kubenswrapper[4860]: I1014 15:04:31.070728 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-memberlist\") pod \"speaker-tbgl7\" (UID: \"9ded0ce9-6baf-429a-b3ad-493b2bfda7de\") " pod="metallb-system/speaker-tbgl7" Oct 14 15:04:31 crc kubenswrapper[4860]: I1014 15:04:31.088213 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9ded0ce9-6baf-429a-b3ad-493b2bfda7de-memberlist\") pod \"speaker-tbgl7\" (UID: \"9ded0ce9-6baf-429a-b3ad-493b2bfda7de\") " pod="metallb-system/speaker-tbgl7" Oct 14 15:04:31 crc kubenswrapper[4860]: I1014 15:04:31.189788 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-tbgl7" Oct 14 15:04:31 crc kubenswrapper[4860]: I1014 15:04:31.621659 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tbgl7" event={"ID":"9ded0ce9-6baf-429a-b3ad-493b2bfda7de","Type":"ContainerStarted","Data":"51d6be19b7ceee8ac94ed738b6a29caac4417591b5178edc5be401dce93c10b3"} Oct 14 15:04:32 crc kubenswrapper[4860]: I1014 15:04:32.629878 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tbgl7" event={"ID":"9ded0ce9-6baf-429a-b3ad-493b2bfda7de","Type":"ContainerStarted","Data":"0415b3df4e959d94bc79961c7e688398ed2770bc5330434cd8b1c524cc4815e6"} Oct 14 15:04:32 crc kubenswrapper[4860]: I1014 15:04:32.630221 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tbgl7" event={"ID":"9ded0ce9-6baf-429a-b3ad-493b2bfda7de","Type":"ContainerStarted","Data":"b9b00bc00f7239a9f7df9602a1445545b1e654d8206018fff95ec738083e7b65"} Oct 14 15:04:32 crc kubenswrapper[4860]: I1014 15:04:32.630269 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-tbgl7" Oct 14 15:04:32 crc kubenswrapper[4860]: I1014 15:04:32.661009 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-tbgl7" podStartSLOduration=3.660986726 podStartE2EDuration="3.660986726s" podCreationTimestamp="2025-10-14 15:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:04:32.656937278 +0000 UTC m=+934.243720727" watchObservedRunningTime="2025-10-14 15:04:32.660986726 +0000 UTC m=+934.247770175" Oct 14 15:04:38 crc kubenswrapper[4860]: I1014 15:04:38.678117 4860 generic.go:334] "Generic (PLEG): container finished" podID="63f2d00d-6dad-48ec-91c9-33ba7f88c5f2" containerID="0611d56e1c55f97c2c2436a6c4d2a2eaf384639e46f248e500b90bb735e7f82b" exitCode=0 Oct 14 15:04:38 crc kubenswrapper[4860]: I1014 15:04:38.678161 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7545s" event={"ID":"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2","Type":"ContainerDied","Data":"0611d56e1c55f97c2c2436a6c4d2a2eaf384639e46f248e500b90bb735e7f82b"} Oct 14 15:04:38 crc kubenswrapper[4860]: I1014 15:04:38.681092 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb" event={"ID":"db44a95a-8142-4353-affc-7227a205135c","Type":"ContainerStarted","Data":"096cb8d29823a73f1ec828f0dfa1d3a6ec3eb90a99f5faca957b8af55a4834f9"} Oct 14 15:04:38 crc kubenswrapper[4860]: I1014 15:04:38.681251 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb" Oct 14 15:04:38 crc kubenswrapper[4860]: I1014 15:04:38.724696 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb" podStartSLOduration=2.511357205 podStartE2EDuration="9.724674539s" podCreationTimestamp="2025-10-14 15:04:29 +0000 UTC" firstStartedPulling="2025-10-14 15:04:30.322782739 +0000 UTC m=+931.909566188" lastFinishedPulling="2025-10-14 15:04:37.536100073 +0000 UTC m=+939.122883522" observedRunningTime="2025-10-14 15:04:38.719727928 +0000 UTC m=+940.306511387" watchObservedRunningTime="2025-10-14 15:04:38.724674539 +0000 UTC m=+940.311458008" Oct 14 15:04:39 crc kubenswrapper[4860]: I1014 15:04:39.688380 4860 generic.go:334] "Generic (PLEG): container finished" 
podID="63f2d00d-6dad-48ec-91c9-33ba7f88c5f2" containerID="959a606cf24aadf5f0059032c23cd5f2eeba80d60857ca8b376e23b4edee4626" exitCode=0 Oct 14 15:04:39 crc kubenswrapper[4860]: I1014 15:04:39.688482 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7545s" event={"ID":"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2","Type":"ContainerDied","Data":"959a606cf24aadf5f0059032c23cd5f2eeba80d60857ca8b376e23b4edee4626"} Oct 14 15:04:41 crc kubenswrapper[4860]: I1014 15:04:41.194435 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-tbgl7" Oct 14 15:04:41 crc kubenswrapper[4860]: I1014 15:04:41.707594 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7545s" event={"ID":"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2","Type":"ContainerStarted","Data":"a1a516bc5fe763957c2731e0eab8bfc98c1922862745fddb517eef4947bfe3b6"} Oct 14 15:04:42 crc kubenswrapper[4860]: I1014 15:04:42.716303 4860 generic.go:334] "Generic (PLEG): container finished" podID="63f2d00d-6dad-48ec-91c9-33ba7f88c5f2" containerID="a1a516bc5fe763957c2731e0eab8bfc98c1922862745fddb517eef4947bfe3b6" exitCode=0 Oct 14 15:04:42 crc kubenswrapper[4860]: I1014 15:04:42.716412 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7545s" event={"ID":"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2","Type":"ContainerDied","Data":"a1a516bc5fe763957c2731e0eab8bfc98c1922862745fddb517eef4947bfe3b6"} Oct 14 15:04:43 crc kubenswrapper[4860]: I1014 15:04:43.727726 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7545s" event={"ID":"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2","Type":"ContainerStarted","Data":"3a68b7de7e1b41e5e42666d4928b3cadfbc1c0c887b27edd944bc692f4e85d42"} Oct 14 15:04:43 crc kubenswrapper[4860]: I1014 15:04:43.727765 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7545s" event={"ID":"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2","Type":"ContainerStarted","Data":"8fcc85ebcff6da08c7ce2c570f1acef17ed6926901a1444cb87564385e1f10d6"} Oct 14 15:04:43 crc kubenswrapper[4860]: I1014 15:04:43.727792 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7545s" event={"ID":"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2","Type":"ContainerStarted","Data":"df98f21ba245a66965da4b2458ea1fdb1b5b08792a7a1170894735c7013d515e"} Oct 14 15:04:43 crc kubenswrapper[4860]: I1014 15:04:43.727801 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7545s" event={"ID":"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2","Type":"ContainerStarted","Data":"afd8efd3a7279b15c20b06fecef3a42d2008650ce48403b5fa952b7bac1c399f"} Oct 14 15:04:43 crc kubenswrapper[4860]: I1014 15:04:43.727810 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7545s" event={"ID":"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2","Type":"ContainerStarted","Data":"6e849ab522f2714c940663c38c1e9151a3864270d28581999a9a39c597ea2d7c"} Oct 14 15:04:43 crc kubenswrapper[4860]: I1014 15:04:43.727817 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7545s" event={"ID":"63f2d00d-6dad-48ec-91c9-33ba7f88c5f2","Type":"ContainerStarted","Data":"132d954b714204eb55b8a495f7e525824b970f040d8f77f67675920248759cc4"} Oct 14 15:04:43 crc kubenswrapper[4860]: I1014 15:04:43.728263 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:43 crc kubenswrapper[4860]: I1014 15:04:43.754933 4860 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-7545s" podStartSLOduration=7.39625535 podStartE2EDuration="14.75491414s" podCreationTimestamp="2025-10-14 15:04:29 +0000 UTC" firstStartedPulling="2025-10-14 15:04:30.190789706 +0000 UTC m=+931.777573155" lastFinishedPulling="2025-10-14 15:04:37.549448496 +0000 UTC m=+939.136231945" observedRunningTime="2025-10-14 15:04:43.748984296 +0000 UTC m=+945.335767775" watchObservedRunningTime="2025-10-14 15:04:43.75491414 +0000 UTC m=+945.341697589" Oct 14 15:04:44 crc kubenswrapper[4860]: I1014 15:04:44.494193 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7pdqf"] Oct 14 15:04:44 crc kubenswrapper[4860]: I1014 15:04:44.495200 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7pdqf" Oct 14 15:04:44 crc kubenswrapper[4860]: I1014 15:04:44.498131 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 14 15:04:44 crc kubenswrapper[4860]: I1014 15:04:44.498357 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-jdrnp" Oct 14 15:04:44 crc kubenswrapper[4860]: I1014 15:04:44.498823 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 14 15:04:44 crc kubenswrapper[4860]: I1014 15:04:44.533247 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7pdqf"] Oct 14 15:04:44 crc kubenswrapper[4860]: I1014 15:04:44.666776 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkxbk\" (UniqueName: \"kubernetes.io/projected/0e49fc3a-acfb-46bb-ada5-f73f2b6721c5-kube-api-access-qkxbk\") pod \"openstack-operator-index-7pdqf\" (UID: \"0e49fc3a-acfb-46bb-ada5-f73f2b6721c5\") " pod="openstack-operators/openstack-operator-index-7pdqf" Oct 14 15:04:44 crc kubenswrapper[4860]: I1014 15:04:44.768332 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkxbk\" (UniqueName: \"kubernetes.io/projected/0e49fc3a-acfb-46bb-ada5-f73f2b6721c5-kube-api-access-qkxbk\") pod \"openstack-operator-index-7pdqf\" (UID: \"0e49fc3a-acfb-46bb-ada5-f73f2b6721c5\") " pod="openstack-operators/openstack-operator-index-7pdqf" Oct 14 15:04:44 crc kubenswrapper[4860]: I1014 15:04:44.785612 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkxbk\" (UniqueName: \"kubernetes.io/projected/0e49fc3a-acfb-46bb-ada5-f73f2b6721c5-kube-api-access-qkxbk\") pod \"openstack-operator-index-7pdqf\" (UID: \"0e49fc3a-acfb-46bb-ada5-f73f2b6721c5\") " pod="openstack-operators/openstack-operator-index-7pdqf" Oct 14 15:04:44 crc kubenswrapper[4860]: I1014 15:04:44.815380 4860 util.go:30] "No sandbox for pod can be found. 
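The pod_startup_latency_tracker entries follow one relation that can be checked directly against the numbers above: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from it. Pods whose images were already local carry the zero time (0001-01-01) for both pull stamps, so SLO and E2E coincide (controller-68d546b9d8-np5tb, speaker-tbgl7). A small Go check using the frr-k8s-7545s entry:

    // Verifies the frr-k8s-7545s numbers: E2E = 14.75491414s, SLO = 7.39625535s.
    package main

    import (
    	"fmt"
    	"time"
    )

    // mustParse reads timestamps in the format the kubelet logs them.
    func mustParse(s string) time.Time {
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-10-14 15:04:29 +0000 UTC")
    	running := mustParse("2025-10-14 15:04:43.75491414 +0000 UTC")
    	pullStart := mustParse("2025-10-14 15:04:30.190789706 +0000 UTC")
    	pullEnd := mustParse("2025-10-14 15:04:37.549448496 +0000 UTC")

    	e2e := running.Sub(created)         // podStartE2EDuration
    	slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration: pull time excluded
    	fmt.Println(e2e, slo)               // 14.75491414s 7.39625535s
    }

The pull window here is 7.35865879s, which is exactly the gap between the two logged durations.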
Need to start a new one" pod="openstack-operators/openstack-operator-index-7pdqf" Oct 14 15:04:45 crc kubenswrapper[4860]: I1014 15:04:45.022278 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7pdqf"] Oct 14 15:04:45 crc kubenswrapper[4860]: W1014 15:04:45.030229 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e49fc3a_acfb_46bb_ada5_f73f2b6721c5.slice/crio-e36a52e73447c777f55cbd58ec45f5fbc416e9a68315d4fb028cd8b7dd3c96a5 WatchSource:0}: Error finding container e36a52e73447c777f55cbd58ec45f5fbc416e9a68315d4fb028cd8b7dd3c96a5: Status 404 returned error can't find the container with id e36a52e73447c777f55cbd58ec45f5fbc416e9a68315d4fb028cd8b7dd3c96a5 Oct 14 15:04:45 crc kubenswrapper[4860]: I1014 15:04:45.076962 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:45 crc kubenswrapper[4860]: I1014 15:04:45.128833 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-7545s" Oct 14 15:04:45 crc kubenswrapper[4860]: I1014 15:04:45.740114 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7pdqf" event={"ID":"0e49fc3a-acfb-46bb-ada5-f73f2b6721c5","Type":"ContainerStarted","Data":"e36a52e73447c777f55cbd58ec45f5fbc416e9a68315d4fb028cd8b7dd3c96a5"} Oct 14 15:04:47 crc kubenswrapper[4860]: I1014 15:04:47.673500 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-7pdqf"] Oct 14 15:04:48 crc kubenswrapper[4860]: I1014 15:04:48.273323 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nv67r"] Oct 14 15:04:48 crc kubenswrapper[4860]: I1014 15:04:48.274077 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nv67r" Oct 14 15:04:48 crc kubenswrapper[4860]: I1014 15:04:48.287761 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nv67r"] Oct 14 15:04:48 crc kubenswrapper[4860]: I1014 15:04:48.419634 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lq4n\" (UniqueName: \"kubernetes.io/projected/24580674-ab2e-46af-b79a-e2396d8f61a5-kube-api-access-7lq4n\") pod \"openstack-operator-index-nv67r\" (UID: \"24580674-ab2e-46af-b79a-e2396d8f61a5\") " pod="openstack-operators/openstack-operator-index-nv67r" Oct 14 15:04:48 crc kubenswrapper[4860]: I1014 15:04:48.520959 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lq4n\" (UniqueName: \"kubernetes.io/projected/24580674-ab2e-46af-b79a-e2396d8f61a5-kube-api-access-7lq4n\") pod \"openstack-operator-index-nv67r\" (UID: \"24580674-ab2e-46af-b79a-e2396d8f61a5\") " pod="openstack-operators/openstack-operator-index-nv67r" Oct 14 15:04:48 crc kubenswrapper[4860]: I1014 15:04:48.540940 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lq4n\" (UniqueName: \"kubernetes.io/projected/24580674-ab2e-46af-b79a-e2396d8f61a5-kube-api-access-7lq4n\") pod \"openstack-operator-index-nv67r\" (UID: \"24580674-ab2e-46af-b79a-e2396d8f61a5\") " pod="openstack-operators/openstack-operator-index-nv67r" Oct 14 15:04:48 crc kubenswrapper[4860]: I1014 15:04:48.601202 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nv67r" Oct 14 15:04:48 crc kubenswrapper[4860]: I1014 15:04:48.760338 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7pdqf" event={"ID":"0e49fc3a-acfb-46bb-ada5-f73f2b6721c5","Type":"ContainerStarted","Data":"88b6b683cdcc34941127c40fcdc146f6fc7c96e9c3f3ace36108f2c9592a68cd"} Oct 14 15:04:48 crc kubenswrapper[4860]: I1014 15:04:48.760800 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-7pdqf" podUID="0e49fc3a-acfb-46bb-ada5-f73f2b6721c5" containerName="registry-server" containerID="cri-o://88b6b683cdcc34941127c40fcdc146f6fc7c96e9c3f3ace36108f2c9592a68cd" gracePeriod=2 Oct 14 15:04:48 crc kubenswrapper[4860]: I1014 15:04:48.788678 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7pdqf" podStartSLOduration=1.323350675 podStartE2EDuration="4.788659596s" podCreationTimestamp="2025-10-14 15:04:44 +0000 UTC" firstStartedPulling="2025-10-14 15:04:45.032044195 +0000 UTC m=+946.618827644" lastFinishedPulling="2025-10-14 15:04:48.497353116 +0000 UTC m=+950.084136565" observedRunningTime="2025-10-14 15:04:48.782359074 +0000 UTC m=+950.369142533" watchObservedRunningTime="2025-10-14 15:04:48.788659596 +0000 UTC m=+950.375443045" Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.037871 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nv67r"] Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.087042 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7pdqf" Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.231814 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkxbk\" (UniqueName: \"kubernetes.io/projected/0e49fc3a-acfb-46bb-ada5-f73f2b6721c5-kube-api-access-qkxbk\") pod \"0e49fc3a-acfb-46bb-ada5-f73f2b6721c5\" (UID: \"0e49fc3a-acfb-46bb-ada5-f73f2b6721c5\") " Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.238257 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e49fc3a-acfb-46bb-ada5-f73f2b6721c5-kube-api-access-qkxbk" (OuterVolumeSpecName: "kube-api-access-qkxbk") pod "0e49fc3a-acfb-46bb-ada5-f73f2b6721c5" (UID: "0e49fc3a-acfb-46bb-ada5-f73f2b6721c5"). InnerVolumeSpecName "kube-api-access-qkxbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.333521 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkxbk\" (UniqueName: \"kubernetes.io/projected/0e49fc3a-acfb-46bb-ada5-f73f2b6721c5-kube-api-access-qkxbk\") on node \"crc\" DevicePath \"\"" Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.656217 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-np5tb" Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.771237 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nv67r" event={"ID":"24580674-ab2e-46af-b79a-e2396d8f61a5","Type":"ContainerStarted","Data":"d38f7b9a73cac1e96c5b867bfe4aae5a73219dec274141a0f7000dc4e797034c"} Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.771287 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nv67r" event={"ID":"24580674-ab2e-46af-b79a-e2396d8f61a5","Type":"ContainerStarted","Data":"3e163ada75164660c3d59be0f7f823f69e59a5e30d84c2de69063cec99471c65"} Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.775087 4860 generic.go:334] "Generic (PLEG): container finished" podID="0e49fc3a-acfb-46bb-ada5-f73f2b6721c5" containerID="88b6b683cdcc34941127c40fcdc146f6fc7c96e9c3f3ace36108f2c9592a68cd" exitCode=0 Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.775159 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7pdqf" event={"ID":"0e49fc3a-acfb-46bb-ada5-f73f2b6721c5","Type":"ContainerDied","Data":"88b6b683cdcc34941127c40fcdc146f6fc7c96e9c3f3ace36108f2c9592a68cd"} Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.775199 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7pdqf" event={"ID":"0e49fc3a-acfb-46bb-ada5-f73f2b6721c5","Type":"ContainerDied","Data":"e36a52e73447c777f55cbd58ec45f5fbc416e9a68315d4fb028cd8b7dd3c96a5"} Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.775280 4860 scope.go:117] "RemoveContainer" containerID="88b6b683cdcc34941127c40fcdc146f6fc7c96e9c3f3ace36108f2c9592a68cd" Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.775291 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7pdqf" Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.791711 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nv67r" podStartSLOduration=1.694495571 podStartE2EDuration="1.79168589s" podCreationTimestamp="2025-10-14 15:04:48 +0000 UTC" firstStartedPulling="2025-10-14 15:04:49.048441581 +0000 UTC m=+950.635225030" lastFinishedPulling="2025-10-14 15:04:49.14563191 +0000 UTC m=+950.732415349" observedRunningTime="2025-10-14 15:04:49.789850735 +0000 UTC m=+951.376634204" watchObservedRunningTime="2025-10-14 15:04:49.79168589 +0000 UTC m=+951.378469339" Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.799470 4860 scope.go:117] "RemoveContainer" containerID="88b6b683cdcc34941127c40fcdc146f6fc7c96e9c3f3ace36108f2c9592a68cd" Oct 14 15:04:49 crc kubenswrapper[4860]: E1014 15:04:49.799788 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b6b683cdcc34941127c40fcdc146f6fc7c96e9c3f3ace36108f2c9592a68cd\": container with ID starting with 88b6b683cdcc34941127c40fcdc146f6fc7c96e9c3f3ace36108f2c9592a68cd not found: ID does not exist" containerID="88b6b683cdcc34941127c40fcdc146f6fc7c96e9c3f3ace36108f2c9592a68cd" Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.799831 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b6b683cdcc34941127c40fcdc146f6fc7c96e9c3f3ace36108f2c9592a68cd"} err="failed to get container status \"88b6b683cdcc34941127c40fcdc146f6fc7c96e9c3f3ace36108f2c9592a68cd\": rpc error: code = NotFound desc = could not find container \"88b6b683cdcc34941127c40fcdc146f6fc7c96e9c3f3ace36108f2c9592a68cd\": container with ID starting with 88b6b683cdcc34941127c40fcdc146f6fc7c96e9c3f3ace36108f2c9592a68cd not found: ID does not exist" Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.822722 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-7pdqf"] Oct 14 15:04:49 crc kubenswrapper[4860]: I1014 15:04:49.823848 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-7pdqf"] Oct 14 15:04:50 crc kubenswrapper[4860]: I1014 15:04:50.087976 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zvcbb" Oct 14 15:04:51 crc kubenswrapper[4860]: I1014 15:04:51.068100 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e49fc3a-acfb-46bb-ada5-f73f2b6721c5" path="/var/lib/kubelet/pods/0e49fc3a-acfb-46bb-ada5-f73f2b6721c5/volumes" Oct 14 15:04:58 crc kubenswrapper[4860]: I1014 15:04:58.601893 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-nv67r" Oct 14 15:04:58 crc kubenswrapper[4860]: I1014 15:04:58.602668 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nv67r" Oct 14 15:04:58 crc kubenswrapper[4860]: I1014 15:04:58.637782 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-nv67r" Oct 14 15:04:58 crc kubenswrapper[4860]: I1014 15:04:58.854618 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nv67r" Oct 14 15:05:00 crc 
Oct 14 15:05:06 crc kubenswrapper[4860]: I1014 15:05:06.402495 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb"]
Oct 14 15:05:06 crc kubenswrapper[4860]: E1014 15:05:06.403124 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e49fc3a-acfb-46bb-ada5-f73f2b6721c5" containerName="registry-server"
Oct 14 15:05:06 crc kubenswrapper[4860]: I1014 15:05:06.403145 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e49fc3a-acfb-46bb-ada5-f73f2b6721c5" containerName="registry-server"
Oct 14 15:05:06 crc kubenswrapper[4860]: I1014 15:05:06.403306 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e49fc3a-acfb-46bb-ada5-f73f2b6721c5" containerName="registry-server"
Oct 14 15:05:06 crc kubenswrapper[4860]: I1014 15:05:06.404293 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb"
Oct 14 15:05:06 crc kubenswrapper[4860]: I1014 15:05:06.406510 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-mf2x5"
Oct 14 15:05:06 crc kubenswrapper[4860]: I1014 15:05:06.416454 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb"]
Oct 14 15:05:06 crc kubenswrapper[4860]: I1014 15:05:06.571741 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/940211d1-5595-4283-b049-57cc681b2ffc-bundle\") pod \"c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb\" (UID: \"940211d1-5595-4283-b049-57cc681b2ffc\") " pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb"
Oct 14 15:05:06 crc kubenswrapper[4860]: I1014 15:05:06.571847 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/940211d1-5595-4283-b049-57cc681b2ffc-util\") pod \"c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb\" (UID: \"940211d1-5595-4283-b049-57cc681b2ffc\") " pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb"
Oct 14 15:05:06 crc kubenswrapper[4860]: I1014 15:05:06.571899 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m85z9\" (UniqueName: \"kubernetes.io/projected/940211d1-5595-4283-b049-57cc681b2ffc-kube-api-access-m85z9\") pod \"c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb\" (UID: \"940211d1-5595-4283-b049-57cc681b2ffc\") " pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb"
Oct 14 15:05:06 crc kubenswrapper[4860]: I1014 15:05:06.673523 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m85z9\" (UniqueName: \"kubernetes.io/projected/940211d1-5595-4283-b049-57cc681b2ffc-kube-api-access-m85z9\") pod \"c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb\" (UID: \"940211d1-5595-4283-b049-57cc681b2ffc\") " pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb"
Oct 14 15:05:06 crc kubenswrapper[4860]: I1014 15:05:06.673591 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/940211d1-5595-4283-b049-57cc681b2ffc-bundle\") pod \"c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb\" (UID: \"940211d1-5595-4283-b049-57cc681b2ffc\") " pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb"
Oct 14 15:05:06 crc kubenswrapper[4860]: I1014 15:05:06.673643 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/940211d1-5595-4283-b049-57cc681b2ffc-util\") pod \"c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb\" (UID: \"940211d1-5595-4283-b049-57cc681b2ffc\") " pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb"
Oct 14 15:05:06 crc kubenswrapper[4860]: I1014 15:05:06.674127 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/940211d1-5595-4283-b049-57cc681b2ffc-util\") pod \"c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb\" (UID: \"940211d1-5595-4283-b049-57cc681b2ffc\") " pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb"
Oct 14 15:05:06 crc kubenswrapper[4860]: I1014 15:05:06.674197 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/940211d1-5595-4283-b049-57cc681b2ffc-bundle\") pod \"c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb\" (UID: \"940211d1-5595-4283-b049-57cc681b2ffc\") " pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb"
Oct 14 15:05:06 crc kubenswrapper[4860]: I1014 15:05:06.692971 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m85z9\" (UniqueName: \"kubernetes.io/projected/940211d1-5595-4283-b049-57cc681b2ffc-kube-api-access-m85z9\") pod \"c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb\" (UID: \"940211d1-5595-4283-b049-57cc681b2ffc\") " pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb"
Oct 14 15:05:06 crc kubenswrapper[4860]: I1014 15:05:06.730224 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb"
Oct 14 15:05:07 crc kubenswrapper[4860]: I1014 15:05:07.115517 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb"]
Oct 14 15:05:07 crc kubenswrapper[4860]: I1014 15:05:07.882540 4860 generic.go:334] "Generic (PLEG): container finished" podID="940211d1-5595-4283-b049-57cc681b2ffc" containerID="6e21851b643eb00d57a4b5c817921db2cec74561a13d5016b54baf59b50152c6" exitCode=0
Oct 14 15:05:07 crc kubenswrapper[4860]: I1014 15:05:07.882833 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb" event={"ID":"940211d1-5595-4283-b049-57cc681b2ffc","Type":"ContainerDied","Data":"6e21851b643eb00d57a4b5c817921db2cec74561a13d5016b54baf59b50152c6"}
Oct 14 15:05:07 crc kubenswrapper[4860]: I1014 15:05:07.882983 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb" event={"ID":"940211d1-5595-4283-b049-57cc681b2ffc","Type":"ContainerStarted","Data":"d662937496a287e4330983111fbd9a0738c2d2736b534f94cee1244afed9f025"}
Oct 14 15:05:08 crc kubenswrapper[4860]: I1014 15:05:08.893581 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb" event={"ID":"940211d1-5595-4283-b049-57cc681b2ffc","Type":"ContainerStarted","Data":"7b7fc4e683cc73c3a41b63c2dffd42e2c57056fb937f0a8c7d9573c4ad1d50d0"}
Oct 14 15:05:09 crc kubenswrapper[4860]: I1014 15:05:09.907790 4860 generic.go:334] "Generic (PLEG): container finished" podID="940211d1-5595-4283-b049-57cc681b2ffc" containerID="7b7fc4e683cc73c3a41b63c2dffd42e2c57056fb937f0a8c7d9573c4ad1d50d0" exitCode=0
Oct 14 15:05:09 crc kubenswrapper[4860]: I1014 15:05:09.907878 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb" event={"ID":"940211d1-5595-4283-b049-57cc681b2ffc","Type":"ContainerDied","Data":"7b7fc4e683cc73c3a41b63c2dffd42e2c57056fb937f0a8c7d9573c4ad1d50d0"}
Oct 14 15:05:10 crc kubenswrapper[4860]: I1014 15:05:10.915088 4860 generic.go:334] "Generic (PLEG): container finished" podID="940211d1-5595-4283-b049-57cc681b2ffc" containerID="e29fb0758ce003106942e453d9b5015b8446a3e0f9569ec6f86a6634187f465c" exitCode=0
Oct 14 15:05:10 crc kubenswrapper[4860]: I1014 15:05:10.915183 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb" event={"ID":"940211d1-5595-4283-b049-57cc681b2ffc","Type":"ContainerDied","Data":"e29fb0758ce003106942e453d9b5015b8446a3e0f9569ec6f86a6634187f465c"}
Oct 14 15:05:12 crc kubenswrapper[4860]: I1014 15:05:12.220158 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb"
Oct 14 15:05:12 crc kubenswrapper[4860]: I1014 15:05:12.344976 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/940211d1-5595-4283-b049-57cc681b2ffc-util\") pod \"940211d1-5595-4283-b049-57cc681b2ffc\" (UID: \"940211d1-5595-4283-b049-57cc681b2ffc\") "
Oct 14 15:05:12 crc kubenswrapper[4860]: I1014 15:05:12.345160 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/940211d1-5595-4283-b049-57cc681b2ffc-bundle\") pod \"940211d1-5595-4283-b049-57cc681b2ffc\" (UID: \"940211d1-5595-4283-b049-57cc681b2ffc\") "
Oct 14 15:05:12 crc kubenswrapper[4860]: I1014 15:05:12.345208 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m85z9\" (UniqueName: \"kubernetes.io/projected/940211d1-5595-4283-b049-57cc681b2ffc-kube-api-access-m85z9\") pod \"940211d1-5595-4283-b049-57cc681b2ffc\" (UID: \"940211d1-5595-4283-b049-57cc681b2ffc\") "
Oct 14 15:05:12 crc kubenswrapper[4860]: I1014 15:05:12.346197 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/940211d1-5595-4283-b049-57cc681b2ffc-bundle" (OuterVolumeSpecName: "bundle") pod "940211d1-5595-4283-b049-57cc681b2ffc" (UID: "940211d1-5595-4283-b049-57cc681b2ffc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 15:05:12 crc kubenswrapper[4860]: I1014 15:05:12.353428 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/940211d1-5595-4283-b049-57cc681b2ffc-kube-api-access-m85z9" (OuterVolumeSpecName: "kube-api-access-m85z9") pod "940211d1-5595-4283-b049-57cc681b2ffc" (UID: "940211d1-5595-4283-b049-57cc681b2ffc"). InnerVolumeSpecName "kube-api-access-m85z9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:05:12 crc kubenswrapper[4860]: I1014 15:05:12.360664 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/940211d1-5595-4283-b049-57cc681b2ffc-util" (OuterVolumeSpecName: "util") pod "940211d1-5595-4283-b049-57cc681b2ffc" (UID: "940211d1-5595-4283-b049-57cc681b2ffc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 15:05:12 crc kubenswrapper[4860]: I1014 15:05:12.447581 4860 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/940211d1-5595-4283-b049-57cc681b2ffc-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 15:05:12 crc kubenswrapper[4860]: I1014 15:05:12.447826 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m85z9\" (UniqueName: \"kubernetes.io/projected/940211d1-5595-4283-b049-57cc681b2ffc-kube-api-access-m85z9\") on node \"crc\" DevicePath \"\""
Oct 14 15:05:12 crc kubenswrapper[4860]: I1014 15:05:12.447923 4860 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/940211d1-5595-4283-b049-57cc681b2ffc-util\") on node \"crc\" DevicePath \"\""
Oct 14 15:05:12 crc kubenswrapper[4860]: I1014 15:05:12.933341 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb" event={"ID":"940211d1-5595-4283-b049-57cc681b2ffc","Type":"ContainerDied","Data":"d662937496a287e4330983111fbd9a0738c2d2736b534f94cee1244afed9f025"}
Oct 14 15:05:12 crc kubenswrapper[4860]: I1014 15:05:12.933382 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d662937496a287e4330983111fbd9a0738c2d2736b534f94cee1244afed9f025"
Oct 14 15:05:12 crc kubenswrapper[4860]: I1014 15:05:12.933396 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb"
Oct 14 15:05:19 crc kubenswrapper[4860]: I1014 15:05:19.773990 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5bc7d8f4c-hjwzs"]
Oct 14 15:05:19 crc kubenswrapper[4860]: E1014 15:05:19.774902 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="940211d1-5595-4283-b049-57cc681b2ffc" containerName="util"
Oct 14 15:05:19 crc kubenswrapper[4860]: I1014 15:05:19.774915 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="940211d1-5595-4283-b049-57cc681b2ffc" containerName="util"
Oct 14 15:05:19 crc kubenswrapper[4860]: E1014 15:05:19.774942 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="940211d1-5595-4283-b049-57cc681b2ffc" containerName="extract"
Oct 14 15:05:19 crc kubenswrapper[4860]: I1014 15:05:19.774949 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="940211d1-5595-4283-b049-57cc681b2ffc" containerName="extract"
Oct 14 15:05:19 crc kubenswrapper[4860]: E1014 15:05:19.774958 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="940211d1-5595-4283-b049-57cc681b2ffc" containerName="pull"
Oct 14 15:05:19 crc kubenswrapper[4860]: I1014 15:05:19.774964 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="940211d1-5595-4283-b049-57cc681b2ffc" containerName="pull"
Oct 14 15:05:19 crc kubenswrapper[4860]: I1014 15:05:19.775064 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="940211d1-5595-4283-b049-57cc681b2ffc" containerName="extract"
Oct 14 15:05:19 crc kubenswrapper[4860]: I1014 15:05:19.775636 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5bc7d8f4c-hjwzs"
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5bc7d8f4c-hjwzs" Oct 14 15:05:19 crc kubenswrapper[4860]: I1014 15:05:19.780748 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-7ks4b" Oct 14 15:05:19 crc kubenswrapper[4860]: I1014 15:05:19.810412 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5bc7d8f4c-hjwzs"] Oct 14 15:05:19 crc kubenswrapper[4860]: I1014 15:05:19.844744 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnz9p\" (UniqueName: \"kubernetes.io/projected/88da4870-694b-46ba-9fda-5e85357bcb5e-kube-api-access-vnz9p\") pod \"openstack-operator-controller-operator-5bc7d8f4c-hjwzs\" (UID: \"88da4870-694b-46ba-9fda-5e85357bcb5e\") " pod="openstack-operators/openstack-operator-controller-operator-5bc7d8f4c-hjwzs" Oct 14 15:05:19 crc kubenswrapper[4860]: I1014 15:05:19.945577 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnz9p\" (UniqueName: \"kubernetes.io/projected/88da4870-694b-46ba-9fda-5e85357bcb5e-kube-api-access-vnz9p\") pod \"openstack-operator-controller-operator-5bc7d8f4c-hjwzs\" (UID: \"88da4870-694b-46ba-9fda-5e85357bcb5e\") " pod="openstack-operators/openstack-operator-controller-operator-5bc7d8f4c-hjwzs" Oct 14 15:05:19 crc kubenswrapper[4860]: I1014 15:05:19.991967 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnz9p\" (UniqueName: \"kubernetes.io/projected/88da4870-694b-46ba-9fda-5e85357bcb5e-kube-api-access-vnz9p\") pod \"openstack-operator-controller-operator-5bc7d8f4c-hjwzs\" (UID: \"88da4870-694b-46ba-9fda-5e85357bcb5e\") " pod="openstack-operators/openstack-operator-controller-operator-5bc7d8f4c-hjwzs" Oct 14 15:05:20 crc kubenswrapper[4860]: I1014 15:05:20.111813 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5bc7d8f4c-hjwzs" Oct 14 15:05:20 crc kubenswrapper[4860]: I1014 15:05:20.347973 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5bc7d8f4c-hjwzs"] Oct 14 15:05:21 crc kubenswrapper[4860]: I1014 15:05:21.004077 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5bc7d8f4c-hjwzs" event={"ID":"88da4870-694b-46ba-9fda-5e85357bcb5e","Type":"ContainerStarted","Data":"303ee057b383c9f2da4414af51345b2e17132143eae48d18b1966bac854ebe59"} Oct 14 15:05:26 crc kubenswrapper[4860]: I1014 15:05:26.054261 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5bc7d8f4c-hjwzs" event={"ID":"88da4870-694b-46ba-9fda-5e85357bcb5e","Type":"ContainerStarted","Data":"200b30164ee87f381e8b60bbf460d090ee1220773ce3a111d926c5f7348889fe"} Oct 14 15:05:29 crc kubenswrapper[4860]: I1014 15:05:29.073044 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5bc7d8f4c-hjwzs" event={"ID":"88da4870-694b-46ba-9fda-5e85357bcb5e","Type":"ContainerStarted","Data":"96f285c61fe7f87a5a01e3400f8d49bdea6977e5d3e3fd86ee03e49b983ae4dd"} Oct 14 15:05:29 crc kubenswrapper[4860]: I1014 15:05:29.073394 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5bc7d8f4c-hjwzs" Oct 14 15:05:29 crc kubenswrapper[4860]: I1014 15:05:29.127636 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-5bc7d8f4c-hjwzs" podStartSLOduration=2.085409544 podStartE2EDuration="10.12761799s" podCreationTimestamp="2025-10-14 15:05:19 +0000 UTC" firstStartedPulling="2025-10-14 15:05:20.378617354 +0000 UTC m=+981.965400803" lastFinishedPulling="2025-10-14 15:05:28.4208258 +0000 UTC m=+990.007609249" observedRunningTime="2025-10-14 15:05:29.122455245 +0000 UTC m=+990.709238694" watchObservedRunningTime="2025-10-14 15:05:29.12761799 +0000 UTC m=+990.714401429" Oct 14 15:05:30 crc kubenswrapper[4860]: I1014 15:05:30.082995 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5bc7d8f4c-hjwzs" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.811273 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f84fcdbb-lwpwz"] Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.812966 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-lwpwz" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.815114 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dq48w" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.826308 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-59cdc64769-6lzwd"] Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.826522 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xswhb\" (UniqueName: \"kubernetes.io/projected/95d281e4-c140-42c3-ba4e-3d36e98bb29c-kube-api-access-xswhb\") pod \"barbican-operator-controller-manager-64f84fcdbb-lwpwz\" (UID: \"95d281e4-c140-42c3-ba4e-3d36e98bb29c\") " pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-lwpwz" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.827333 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-6lzwd" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.832229 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-2tjxp" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.839700 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f84fcdbb-lwpwz"] Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.847998 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-687df44cdb-f2rfg"] Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.848924 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-f2rfg" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.850072 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-g66gp" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.864045 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-687df44cdb-f2rfg"] Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.869416 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7bb46cd7d-dgxfp"] Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.870556 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-dgxfp" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.875343 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-mgr6j" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.889482 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d9967f8dd-8mngx"] Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.890687 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-8mngx" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.913713 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-spw2s" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.917670 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7bb46cd7d-dgxfp"] Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.927477 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xswhb\" (UniqueName: \"kubernetes.io/projected/95d281e4-c140-42c3-ba4e-3d36e98bb29c-kube-api-access-xswhb\") pod \"barbican-operator-controller-manager-64f84fcdbb-lwpwz\" (UID: \"95d281e4-c140-42c3-ba4e-3d36e98bb29c\") " pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-lwpwz" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.946402 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4"] Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.947431 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.951700 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.951950 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d74794d9b-mdd5z"] Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.951992 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-n9czk" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.953161 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-mdd5z" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.958766 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-dkcft" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.959996 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-59cdc64769-6lzwd"] Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.964201 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d9967f8dd-8mngx"] Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.973856 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d74794d9b-mdd5z"] Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.979865 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4"] Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.983099 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-74cb5cbc49-l9w8x"] Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.984210 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-l9w8x" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.988910 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-f2fch" Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.997175 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-74cb5cbc49-l9w8x"] Oct 14 15:06:04 crc kubenswrapper[4860]: I1014 15:06:04.998575 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xswhb\" (UniqueName: \"kubernetes.io/projected/95d281e4-c140-42c3-ba4e-3d36e98bb29c-kube-api-access-xswhb\") pod \"barbican-operator-controller-manager-64f84fcdbb-lwpwz\" (UID: \"95d281e4-c140-42c3-ba4e-3d36e98bb29c\") " pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-lwpwz" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.010744 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-ddb98f99b-8ht4q"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.011944 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8ht4q" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.019572 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-59578bc799-bc4x8"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.020731 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-59578bc799-bc4x8" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.022449 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-m87pg" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.027315 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-s89fq" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.028731 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46hw7\" (UniqueName: \"kubernetes.io/projected/8680f35c-eae8-49e0-a670-d4b467a987f0-kube-api-access-46hw7\") pod \"cinder-operator-controller-manager-59cdc64769-6lzwd\" (UID: \"8680f35c-eae8-49e0-a670-d4b467a987f0\") " pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-6lzwd" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.028772 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvjgb\" (UniqueName: \"kubernetes.io/projected/6b31fe2f-695e-4b8b-b632-7075e4a9740f-kube-api-access-pvjgb\") pod \"designate-operator-controller-manager-687df44cdb-f2rfg\" (UID: \"6b31fe2f-695e-4b8b-b632-7075e4a9740f\") " pod="openstack-operators/designate-operator-controller-manager-687df44cdb-f2rfg" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.028796 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tqrn\" (UniqueName: \"kubernetes.io/projected/65912b78-7ceb-4bd0-ab72-70fd3574b786-kube-api-access-4tqrn\") pod \"glance-operator-controller-manager-7bb46cd7d-dgxfp\" (UID: \"65912b78-7ceb-4bd0-ab72-70fd3574b786\") 
" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-dgxfp" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.028817 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h8ds\" (UniqueName: \"kubernetes.io/projected/e24ba4ef-9297-4d61-a338-941ce00a2391-kube-api-access-2h8ds\") pod \"heat-operator-controller-manager-6d9967f8dd-8mngx\" (UID: \"e24ba4ef-9297-4d61-a338-941ce00a2391\") " pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-8mngx" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.051414 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-ddb98f99b-8ht4q"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.086517 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-59578bc799-bc4x8"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.111746 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-797d478b46-lfntw"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.112714 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-lfntw" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.117608 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-hmrxh" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.126695 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5777b4f897-2rpj7"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.127923 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-2rpj7" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.129970 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh5r8\" (UniqueName: \"kubernetes.io/projected/4bbd7b36-79fe-423b-a5c6-2237390dea3f-kube-api-access-gh5r8\") pod \"keystone-operator-controller-manager-ddb98f99b-8ht4q\" (UID: \"4bbd7b36-79fe-423b-a5c6-2237390dea3f\") " pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8ht4q" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.130147 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b0c2826-792e-44ca-9bc1-830aefee72d6-cert\") pod \"infra-operator-controller-manager-585fc5b659-hpkm4\" (UID: \"1b0c2826-792e-44ca-9bc1-830aefee72d6\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.130267 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46hw7\" (UniqueName: \"kubernetes.io/projected/8680f35c-eae8-49e0-a670-d4b467a987f0-kube-api-access-46hw7\") pod \"cinder-operator-controller-manager-59cdc64769-6lzwd\" (UID: \"8680f35c-eae8-49e0-a670-d4b467a987f0\") " pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-6lzwd" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.130350 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-644g2\" (UniqueName: \"kubernetes.io/projected/1b0c2826-792e-44ca-9bc1-830aefee72d6-kube-api-access-644g2\") pod \"infra-operator-controller-manager-585fc5b659-hpkm4\" (UID: \"1b0c2826-792e-44ca-9bc1-830aefee72d6\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.130422 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5wk8\" (UniqueName: \"kubernetes.io/projected/95d178d8-e3b2-4141-91af-b82fa61bd86a-kube-api-access-h5wk8\") pod \"horizon-operator-controller-manager-6d74794d9b-mdd5z\" (UID: \"95d178d8-e3b2-4141-91af-b82fa61bd86a\") " pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-mdd5z" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.130498 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvjgb\" (UniqueName: \"kubernetes.io/projected/6b31fe2f-695e-4b8b-b632-7075e4a9740f-kube-api-access-pvjgb\") pod \"designate-operator-controller-manager-687df44cdb-f2rfg\" (UID: \"6b31fe2f-695e-4b8b-b632-7075e4a9740f\") " pod="openstack-operators/designate-operator-controller-manager-687df44cdb-f2rfg" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.130577 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tqrn\" (UniqueName: \"kubernetes.io/projected/65912b78-7ceb-4bd0-ab72-70fd3574b786-kube-api-access-4tqrn\") pod \"glance-operator-controller-manager-7bb46cd7d-dgxfp\" (UID: \"65912b78-7ceb-4bd0-ab72-70fd3574b786\") " pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-dgxfp" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.130787 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfns4\" 
(UniqueName: \"kubernetes.io/projected/786a4f8b-062c-46b7-8028-5079481427db-kube-api-access-qfns4\") pod \"manila-operator-controller-manager-59578bc799-bc4x8\" (UID: \"786a4f8b-062c-46b7-8028-5079481427db\") " pod="openstack-operators/manila-operator-controller-manager-59578bc799-bc4x8" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.130945 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h8ds\" (UniqueName: \"kubernetes.io/projected/e24ba4ef-9297-4d61-a338-941ce00a2391-kube-api-access-2h8ds\") pod \"heat-operator-controller-manager-6d9967f8dd-8mngx\" (UID: \"e24ba4ef-9297-4d61-a338-941ce00a2391\") " pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-8mngx" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.143798 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6ppzv" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.157183 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-797d478b46-lfntw"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.171487 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-lwpwz" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.173525 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-57bb74c7bf-tw4ph"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.178995 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tqrn\" (UniqueName: \"kubernetes.io/projected/65912b78-7ceb-4bd0-ab72-70fd3574b786-kube-api-access-4tqrn\") pod \"glance-operator-controller-manager-7bb46cd7d-dgxfp\" (UID: \"65912b78-7ceb-4bd0-ab72-70fd3574b786\") " pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-dgxfp" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.187973 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-tw4ph" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.192413 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8f85\" (UniqueName: \"kubernetes.io/projected/e3456832-68ce-443e-825f-9d6af6cf829f-kube-api-access-p8f85\") pod \"ironic-operator-controller-manager-74cb5cbc49-l9w8x\" (UID: \"e3456832-68ce-443e-825f-9d6af6cf829f\") " pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-l9w8x" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.193747 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-lh5lm" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.194057 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-dgxfp" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.243813 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h8ds\" (UniqueName: \"kubernetes.io/projected/e24ba4ef-9297-4d61-a338-941ce00a2391-kube-api-access-2h8ds\") pod \"heat-operator-controller-manager-6d9967f8dd-8mngx\" (UID: \"e24ba4ef-9297-4d61-a338-941ce00a2391\") " pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-8mngx" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.249438 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46hw7\" (UniqueName: \"kubernetes.io/projected/8680f35c-eae8-49e0-a670-d4b467a987f0-kube-api-access-46hw7\") pod \"cinder-operator-controller-manager-59cdc64769-6lzwd\" (UID: \"8680f35c-eae8-49e0-a670-d4b467a987f0\") " pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-6lzwd" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.264171 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvjgb\" (UniqueName: \"kubernetes.io/projected/6b31fe2f-695e-4b8b-b632-7075e4a9740f-kube-api-access-pvjgb\") pod \"designate-operator-controller-manager-687df44cdb-f2rfg\" (UID: \"6b31fe2f-695e-4b8b-b632-7075e4a9740f\") " pod="openstack-operators/designate-operator-controller-manager-687df44cdb-f2rfg" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.277264 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5777b4f897-2rpj7"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.294786 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz8cb\" (UniqueName: \"kubernetes.io/projected/5397040a-47ac-487d-8e5a-8fd02d6ec654-kube-api-access-pz8cb\") pod \"mariadb-operator-controller-manager-5777b4f897-2rpj7\" (UID: \"5397040a-47ac-487d-8e5a-8fd02d6ec654\") " pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-2rpj7" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.294834 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-644g2\" (UniqueName: \"kubernetes.io/projected/1b0c2826-792e-44ca-9bc1-830aefee72d6-kube-api-access-644g2\") pod \"infra-operator-controller-manager-585fc5b659-hpkm4\" (UID: \"1b0c2826-792e-44ca-9bc1-830aefee72d6\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.294857 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5wk8\" (UniqueName: \"kubernetes.io/projected/95d178d8-e3b2-4141-91af-b82fa61bd86a-kube-api-access-h5wk8\") pod \"horizon-operator-controller-manager-6d74794d9b-mdd5z\" (UID: \"95d178d8-e3b2-4141-91af-b82fa61bd86a\") " pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-mdd5z" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.294875 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcjdm\" (UniqueName: \"kubernetes.io/projected/1f864c3d-2e54-459b-b613-3785d0cf4ae6-kube-api-access-lcjdm\") pod \"nova-operator-controller-manager-57bb74c7bf-tw4ph\" (UID: \"1f864c3d-2e54-459b-b613-3785d0cf4ae6\") " pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-tw4ph" Oct 14 15:06:05 crc 
kubenswrapper[4860]: I1014 15:06:05.294905 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7jl4\" (UniqueName: \"kubernetes.io/projected/1e6c58c7-4e05-4c8d-98f0-2063b1ba613f-kube-api-access-k7jl4\") pod \"neutron-operator-controller-manager-797d478b46-lfntw\" (UID: \"1e6c58c7-4e05-4c8d-98f0-2063b1ba613f\") " pod="openstack-operators/neutron-operator-controller-manager-797d478b46-lfntw" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.294925 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfns4\" (UniqueName: \"kubernetes.io/projected/786a4f8b-062c-46b7-8028-5079481427db-kube-api-access-qfns4\") pod \"manila-operator-controller-manager-59578bc799-bc4x8\" (UID: \"786a4f8b-062c-46b7-8028-5079481427db\") " pod="openstack-operators/manila-operator-controller-manager-59578bc799-bc4x8" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.294950 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8f85\" (UniqueName: \"kubernetes.io/projected/e3456832-68ce-443e-825f-9d6af6cf829f-kube-api-access-p8f85\") pod \"ironic-operator-controller-manager-74cb5cbc49-l9w8x\" (UID: \"e3456832-68ce-443e-825f-9d6af6cf829f\") " pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-l9w8x" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.294983 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh5r8\" (UniqueName: \"kubernetes.io/projected/4bbd7b36-79fe-423b-a5c6-2237390dea3f-kube-api-access-gh5r8\") pod \"keystone-operator-controller-manager-ddb98f99b-8ht4q\" (UID: \"4bbd7b36-79fe-423b-a5c6-2237390dea3f\") " pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8ht4q" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.295000 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b0c2826-792e-44ca-9bc1-830aefee72d6-cert\") pod \"infra-operator-controller-manager-585fc5b659-hpkm4\" (UID: \"1b0c2826-792e-44ca-9bc1-830aefee72d6\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" Oct 14 15:06:05 crc kubenswrapper[4860]: E1014 15:06:05.295141 4860 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 14 15:06:05 crc kubenswrapper[4860]: E1014 15:06:05.295197 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b0c2826-792e-44ca-9bc1-830aefee72d6-cert podName:1b0c2826-792e-44ca-9bc1-830aefee72d6 nodeName:}" failed. No retries permitted until 2025-10-14 15:06:05.795176678 +0000 UTC m=+1027.381960117 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1b0c2826-792e-44ca-9bc1-830aefee72d6-cert") pod "infra-operator-controller-manager-585fc5b659-hpkm4" (UID: "1b0c2826-792e-44ca-9bc1-830aefee72d6") : secret "infra-operator-webhook-server-cert" not found Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.301399 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57bb74c7bf-tw4ph"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.308197 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.309696 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.319211 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.324839 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.325003 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.329651 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh5r8\" (UniqueName: \"kubernetes.io/projected/4bbd7b36-79fe-423b-a5c6-2237390dea3f-kube-api-access-gh5r8\") pod \"keystone-operator-controller-manager-ddb98f99b-8ht4q\" (UID: \"4bbd7b36-79fe-423b-a5c6-2237390dea3f\") " pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8ht4q" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.333370 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-sqrj8" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.333570 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.333668 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.333683 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-tklnx" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.334968 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.338655 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-ml522" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.341671 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.350048 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfns4\" (UniqueName: \"kubernetes.io/projected/786a4f8b-062c-46b7-8028-5079481427db-kube-api-access-qfns4\") pod \"manila-operator-controller-manager-59578bc799-bc4x8\" (UID: \"786a4f8b-062c-46b7-8028-5079481427db\") " pod="openstack-operators/manila-operator-controller-manager-59578bc799-bc4x8" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.358353 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5wk8\" (UniqueName: \"kubernetes.io/projected/95d178d8-e3b2-4141-91af-b82fa61bd86a-kube-api-access-h5wk8\") pod \"horizon-operator-controller-manager-6d74794d9b-mdd5z\" (UID: \"95d178d8-e3b2-4141-91af-b82fa61bd86a\") " pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-mdd5z" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.358429 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.359882 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8ht4q" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.379923 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-59578bc799-bc4x8" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.380148 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.382118 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.387411 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8f85\" (UniqueName: \"kubernetes.io/projected/e3456832-68ce-443e-825f-9d6af6cf829f-kube-api-access-p8f85\") pod \"ironic-operator-controller-manager-74cb5cbc49-l9w8x\" (UID: \"e3456832-68ce-443e-825f-9d6af6cf829f\") " pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-l9w8x" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.392818 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6m9sb" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.398167 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkp5t\" (UniqueName: \"kubernetes.io/projected/3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5-kube-api-access-tkp5t\") pod \"ovn-operator-controller-manager-869cc7797f-4kql9\" (UID: \"3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5\") " pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.398259 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv9h5\" (UniqueName: \"kubernetes.io/projected/d0ac64a4-cdc5-4362-9359-712291fafbdf-kube-api-access-cv9h5\") pod \"placement-operator-controller-manager-664664cb68-l6rbl\" (UID: \"d0ac64a4-cdc5-4362-9359-712291fafbdf\") " pod="openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.398283 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz8cb\" (UniqueName: \"kubernetes.io/projected/5397040a-47ac-487d-8e5a-8fd02d6ec654-kube-api-access-pz8cb\") pod \"mariadb-operator-controller-manager-5777b4f897-2rpj7\" (UID: \"5397040a-47ac-487d-8e5a-8fd02d6ec654\") " pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-2rpj7" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.398307 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjglf\" (UniqueName: \"kubernetes.io/projected/df4d54ec-6345-4b47-8ae8-58ae0bf6da7f-kube-api-access-sjglf\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt\" (UID: \"df4d54ec-6345-4b47-8ae8-58ae0bf6da7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.398334 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcjdm\" (UniqueName: \"kubernetes.io/projected/1f864c3d-2e54-459b-b613-3785d0cf4ae6-kube-api-access-lcjdm\") pod \"nova-operator-controller-manager-57bb74c7bf-tw4ph\" (UID: \"1f864c3d-2e54-459b-b613-3785d0cf4ae6\") " pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-tw4ph" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.398353 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df4d54ec-6345-4b47-8ae8-58ae0bf6da7f-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt\" (UID: \"df4d54ec-6345-4b47-8ae8-58ae0bf6da7f\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.398373 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxkhs\" (UniqueName: \"kubernetes.io/projected/ad189aa9-4e21-4d7e-b1de-83497bd83376-kube-api-access-bxkhs\") pod \"octavia-operator-controller-manager-6d7c7ddf95-5l2qq\" (UID: \"ad189aa9-4e21-4d7e-b1de-83497bd83376\") " pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.398409 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7jl4\" (UniqueName: \"kubernetes.io/projected/1e6c58c7-4e05-4c8d-98f0-2063b1ba613f-kube-api-access-k7jl4\") pod \"neutron-operator-controller-manager-797d478b46-lfntw\" (UID: \"1e6c58c7-4e05-4c8d-98f0-2063b1ba613f\") " pod="openstack-operators/neutron-operator-controller-manager-797d478b46-lfntw" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.398725 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-644g2\" (UniqueName: \"kubernetes.io/projected/1b0c2826-792e-44ca-9bc1-830aefee72d6-kube-api-access-644g2\") pod \"infra-operator-controller-manager-585fc5b659-hpkm4\" (UID: \"1b0c2826-792e-44ca-9bc1-830aefee72d6\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.403724 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.406118 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.406544 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.410385 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-p7rs2" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.416211 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.422014 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-l5wl4" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.424587 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7jl4\" (UniqueName: \"kubernetes.io/projected/1e6c58c7-4e05-4c8d-98f0-2063b1ba613f-kube-api-access-k7jl4\") pod \"neutron-operator-controller-manager-797d478b46-lfntw\" (UID: \"1e6c58c7-4e05-4c8d-98f0-2063b1ba613f\") " pod="openstack-operators/neutron-operator-controller-manager-797d478b46-lfntw" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.425783 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.430656 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-lfntw" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.445464 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.455497 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz8cb\" (UniqueName: \"kubernetes.io/projected/5397040a-47ac-487d-8e5a-8fd02d6ec654-kube-api-access-pz8cb\") pod \"mariadb-operator-controller-manager-5777b4f897-2rpj7\" (UID: \"5397040a-47ac-487d-8e5a-8fd02d6ec654\") " pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-2rpj7" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.469702 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-6lzwd" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.470827 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.477188 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-f2rfg" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.487056 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcjdm\" (UniqueName: \"kubernetes.io/projected/1f864c3d-2e54-459b-b613-3785d0cf4ae6-kube-api-access-lcjdm\") pod \"nova-operator-controller-manager-57bb74c7bf-tw4ph\" (UID: \"1f864c3d-2e54-459b-b613-3785d0cf4ae6\") " pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-tw4ph" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.500715 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkp5t\" (UniqueName: \"kubernetes.io/projected/3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5-kube-api-access-tkp5t\") pod \"ovn-operator-controller-manager-869cc7797f-4kql9\" (UID: \"3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5\") " pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.500807 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv9h5\" (UniqueName: \"kubernetes.io/projected/d0ac64a4-cdc5-4362-9359-712291fafbdf-kube-api-access-cv9h5\") pod \"placement-operator-controller-manager-664664cb68-l6rbl\" (UID: \"d0ac64a4-cdc5-4362-9359-712291fafbdf\") " pod="openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.500834 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjglf\" (UniqueName: \"kubernetes.io/projected/df4d54ec-6345-4b47-8ae8-58ae0bf6da7f-kube-api-access-sjglf\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt\" (UID: \"df4d54ec-6345-4b47-8ae8-58ae0bf6da7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.500861 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhdl2\" (UniqueName: \"kubernetes.io/projected/4450d3fe-e520-48c6-ac1d-25344bdedc5e-kube-api-access-fhdl2\") pod 
\"swift-operator-controller-manager-5f4d5dfdc6-rpjh4\" (UID: \"4450d3fe-e520-48c6-ac1d-25344bdedc5e\") " pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.500884 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df4d54ec-6345-4b47-8ae8-58ae0bf6da7f-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt\" (UID: \"df4d54ec-6345-4b47-8ae8-58ae0bf6da7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.500901 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmwhq\" (UniqueName: \"kubernetes.io/projected/572e90ee-e3d4-44a0-b3c5-d0005f4cb41c-kube-api-access-bmwhq\") pod \"telemetry-operator-controller-manager-578874c84d-xpq8w\" (UID: \"572e90ee-e3d4-44a0-b3c5-d0005f4cb41c\") " pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.500922 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxkhs\" (UniqueName: \"kubernetes.io/projected/ad189aa9-4e21-4d7e-b1de-83497bd83376-kube-api-access-bxkhs\") pod \"octavia-operator-controller-manager-6d7c7ddf95-5l2qq\" (UID: \"ad189aa9-4e21-4d7e-b1de-83497bd83376\") " pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq" Oct 14 15:06:05 crc kubenswrapper[4860]: E1014 15:06:05.502693 4860 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 14 15:06:05 crc kubenswrapper[4860]: E1014 15:06:05.521004 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df4d54ec-6345-4b47-8ae8-58ae0bf6da7f-cert podName:df4d54ec-6345-4b47-8ae8-58ae0bf6da7f nodeName:}" failed. No retries permitted until 2025-10-14 15:06:06.020971528 +0000 UTC m=+1027.607754977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/df4d54ec-6345-4b47-8ae8-58ae0bf6da7f-cert") pod "openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" (UID: "df4d54ec-6345-4b47-8ae8-58ae0bf6da7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.508501 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-8mngx" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.539968 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkp5t\" (UniqueName: \"kubernetes.io/projected/3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5-kube-api-access-tkp5t\") pod \"ovn-operator-controller-manager-869cc7797f-4kql9\" (UID: \"3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5\") " pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.558310 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-ffcdd6c94-9m7mm"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.558711 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv9h5\" (UniqueName: \"kubernetes.io/projected/d0ac64a4-cdc5-4362-9359-712291fafbdf-kube-api-access-cv9h5\") pod \"placement-operator-controller-manager-664664cb68-l6rbl\" (UID: \"d0ac64a4-cdc5-4362-9359-712291fafbdf\") " pod="openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.565297 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-9m7mm" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.575694 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjglf\" (UniqueName: \"kubernetes.io/projected/df4d54ec-6345-4b47-8ae8-58ae0bf6da7f-kube-api-access-sjglf\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt\" (UID: \"df4d54ec-6345-4b47-8ae8-58ae0bf6da7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.592801 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-xmxtm" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.593014 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-mdd5z" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.593325 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-ffcdd6c94-9m7mm"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.614859 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-646675d848-lzb7d"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.615844 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-646675d848-lzb7d" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.623293 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-b2brh" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.623329 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-2rpj7" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.624267 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhdl2\" (UniqueName: \"kubernetes.io/projected/4450d3fe-e520-48c6-ac1d-25344bdedc5e-kube-api-access-fhdl2\") pod \"swift-operator-controller-manager-5f4d5dfdc6-rpjh4\" (UID: \"4450d3fe-e520-48c6-ac1d-25344bdedc5e\") " pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.624297 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmwhq\" (UniqueName: \"kubernetes.io/projected/572e90ee-e3d4-44a0-b3c5-d0005f4cb41c-kube-api-access-bmwhq\") pod \"telemetry-operator-controller-manager-578874c84d-xpq8w\" (UID: \"572e90ee-e3d4-44a0-b3c5-d0005f4cb41c\") " pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.624329 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzqbp\" (UniqueName: \"kubernetes.io/projected/f9603eeb-cc1b-4dc8-82e6-9cf64109c774-kube-api-access-lzqbp\") pod \"test-operator-controller-manager-ffcdd6c94-9m7mm\" (UID: \"f9603eeb-cc1b-4dc8-82e6-9cf64109c774\") " pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-9m7mm" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.624364 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvvxb\" (UniqueName: \"kubernetes.io/projected/9d1ea96c-cdba-4586-ae97-c008ff1ed05e-kube-api-access-fvvxb\") pod \"watcher-operator-controller-manager-646675d848-lzb7d\" (UID: \"9d1ea96c-cdba-4586-ae97-c008ff1ed05e\") " pod="openstack-operators/watcher-operator-controller-manager-646675d848-lzb7d" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.624399 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxkhs\" (UniqueName: \"kubernetes.io/projected/ad189aa9-4e21-4d7e-b1de-83497bd83376-kube-api-access-bxkhs\") pod \"octavia-operator-controller-manager-6d7c7ddf95-5l2qq\" (UID: \"ad189aa9-4e21-4d7e-b1de-83497bd83376\") " pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.624889 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-tw4ph" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.647673 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-646675d848-lzb7d"] Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.662970 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-l9w8x" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.669713 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmwhq\" (UniqueName: \"kubernetes.io/projected/572e90ee-e3d4-44a0-b3c5-d0005f4cb41c-kube-api-access-bmwhq\") pod \"telemetry-operator-controller-manager-578874c84d-xpq8w\" (UID: \"572e90ee-e3d4-44a0-b3c5-d0005f4cb41c\") " pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.672090 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhdl2\" (UniqueName: \"kubernetes.io/projected/4450d3fe-e520-48c6-ac1d-25344bdedc5e-kube-api-access-fhdl2\") pod \"swift-operator-controller-manager-5f4d5dfdc6-rpjh4\" (UID: \"4450d3fe-e520-48c6-ac1d-25344bdedc5e\") " pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.680492 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.725228 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzqbp\" (UniqueName: \"kubernetes.io/projected/f9603eeb-cc1b-4dc8-82e6-9cf64109c774-kube-api-access-lzqbp\") pod \"test-operator-controller-manager-ffcdd6c94-9m7mm\" (UID: \"f9603eeb-cc1b-4dc8-82e6-9cf64109c774\") " pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-9m7mm" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.725281 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvvxb\" (UniqueName: \"kubernetes.io/projected/9d1ea96c-cdba-4586-ae97-c008ff1ed05e-kube-api-access-fvvxb\") pod \"watcher-operator-controller-manager-646675d848-lzb7d\" (UID: \"9d1ea96c-cdba-4586-ae97-c008ff1ed05e\") " pod="openstack-operators/watcher-operator-controller-manager-646675d848-lzb7d" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.755984 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzqbp\" (UniqueName: \"kubernetes.io/projected/f9603eeb-cc1b-4dc8-82e6-9cf64109c774-kube-api-access-lzqbp\") pod \"test-operator-controller-manager-ffcdd6c94-9m7mm\" (UID: \"f9603eeb-cc1b-4dc8-82e6-9cf64109c774\") " pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-9m7mm" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.760560 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvvxb\" (UniqueName: \"kubernetes.io/projected/9d1ea96c-cdba-4586-ae97-c008ff1ed05e-kube-api-access-fvvxb\") pod \"watcher-operator-controller-manager-646675d848-lzb7d\" (UID: \"9d1ea96c-cdba-4586-ae97-c008ff1ed05e\") " pod="openstack-operators/watcher-operator-controller-manager-646675d848-lzb7d" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.786352 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.817279 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.826202 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b0c2826-792e-44ca-9bc1-830aefee72d6-cert\") pod \"infra-operator-controller-manager-585fc5b659-hpkm4\" (UID: \"1b0c2826-792e-44ca-9bc1-830aefee72d6\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" Oct 14 15:06:05 crc kubenswrapper[4860]: E1014 15:06:05.826564 4860 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 14 15:06:05 crc kubenswrapper[4860]: E1014 15:06:05.826618 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b0c2826-792e-44ca-9bc1-830aefee72d6-cert podName:1b0c2826-792e-44ca-9bc1-830aefee72d6 nodeName:}" failed. No retries permitted until 2025-10-14 15:06:06.826600168 +0000 UTC m=+1028.413383617 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1b0c2826-792e-44ca-9bc1-830aefee72d6-cert") pod "infra-operator-controller-manager-585fc5b659-hpkm4" (UID: "1b0c2826-792e-44ca-9bc1-830aefee72d6") : secret "infra-operator-webhook-server-cert" not found Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.857301 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.895744 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.924103 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-9m7mm" Oct 14 15:06:05 crc kubenswrapper[4860]: I1014 15:06:05.964184 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-646675d848-lzb7d" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.035986 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df4d54ec-6345-4b47-8ae8-58ae0bf6da7f-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt\" (UID: \"df4d54ec-6345-4b47-8ae8-58ae0bf6da7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" Oct 14 15:06:06 crc kubenswrapper[4860]: E1014 15:06:06.036228 4860 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 14 15:06:06 crc kubenswrapper[4860]: E1014 15:06:06.036285 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df4d54ec-6345-4b47-8ae8-58ae0bf6da7f-cert podName:df4d54ec-6345-4b47-8ae8-58ae0bf6da7f nodeName:}" failed. No retries permitted until 2025-10-14 15:06:07.036270027 +0000 UTC m=+1028.623053476 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/df4d54ec-6345-4b47-8ae8-58ae0bf6da7f-cert") pod "openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" (UID: "df4d54ec-6345-4b47-8ae8-58ae0bf6da7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.036993 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn"] Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.081858 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.086972 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn"] Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.090911 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.092863 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-5l7cg" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.129442 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-gthfm"] Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.143138 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-gthfm" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.151671 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-5nhp8" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.184469 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-gthfm"] Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.246777 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqlb9\" (UniqueName: \"kubernetes.io/projected/c584f96e-f636-458e-9aca-f953ccf4a900-kube-api-access-nqlb9\") pod \"openstack-operator-controller-manager-768555b76-hzfmn\" (UID: \"c584f96e-f636-458e-9aca-f953ccf4a900\") " pod="openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.247047 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxc5p\" (UniqueName: \"kubernetes.io/projected/0bfbfdd2-7b80-46dc-a353-0f5858f0ae4b-kube-api-access-bxc5p\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-gthfm\" (UID: \"0bfbfdd2-7b80-46dc-a353-0f5858f0ae4b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-gthfm" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.247161 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c584f96e-f636-458e-9aca-f953ccf4a900-cert\") pod \"openstack-operator-controller-manager-768555b76-hzfmn\" (UID: \"c584f96e-f636-458e-9aca-f953ccf4a900\") " pod="openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn" Oct 14 15:06:06 crc 
kubenswrapper[4860]: I1014 15:06:06.347608 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxc5p\" (UniqueName: \"kubernetes.io/projected/0bfbfdd2-7b80-46dc-a353-0f5858f0ae4b-kube-api-access-bxc5p\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-gthfm\" (UID: \"0bfbfdd2-7b80-46dc-a353-0f5858f0ae4b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-gthfm" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.347705 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c584f96e-f636-458e-9aca-f953ccf4a900-cert\") pod \"openstack-operator-controller-manager-768555b76-hzfmn\" (UID: \"c584f96e-f636-458e-9aca-f953ccf4a900\") " pod="openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.347767 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqlb9\" (UniqueName: \"kubernetes.io/projected/c584f96e-f636-458e-9aca-f953ccf4a900-kube-api-access-nqlb9\") pod \"openstack-operator-controller-manager-768555b76-hzfmn\" (UID: \"c584f96e-f636-458e-9aca-f953ccf4a900\") " pod="openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn" Oct 14 15:06:06 crc kubenswrapper[4860]: E1014 15:06:06.348417 4860 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 14 15:06:06 crc kubenswrapper[4860]: E1014 15:06:06.348468 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c584f96e-f636-458e-9aca-f953ccf4a900-cert podName:c584f96e-f636-458e-9aca-f953ccf4a900 nodeName:}" failed. No retries permitted until 2025-10-14 15:06:06.848449846 +0000 UTC m=+1028.435233295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c584f96e-f636-458e-9aca-f953ccf4a900-cert") pod "openstack-operator-controller-manager-768555b76-hzfmn" (UID: "c584f96e-f636-458e-9aca-f953ccf4a900") : secret "webhook-server-cert" not found Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.379601 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxc5p\" (UniqueName: \"kubernetes.io/projected/0bfbfdd2-7b80-46dc-a353-0f5858f0ae4b-kube-api-access-bxc5p\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-gthfm\" (UID: \"0bfbfdd2-7b80-46dc-a353-0f5858f0ae4b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-gthfm" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.382782 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqlb9\" (UniqueName: \"kubernetes.io/projected/c584f96e-f636-458e-9aca-f953ccf4a900-kube-api-access-nqlb9\") pod \"openstack-operator-controller-manager-768555b76-hzfmn\" (UID: \"c584f96e-f636-458e-9aca-f953ccf4a900\") " pod="openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.643308 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-gthfm" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.659302 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f84fcdbb-lwpwz"] Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.864918 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-ddb98f99b-8ht4q"] Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.874406 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b0c2826-792e-44ca-9bc1-830aefee72d6-cert\") pod \"infra-operator-controller-manager-585fc5b659-hpkm4\" (UID: \"1b0c2826-792e-44ca-9bc1-830aefee72d6\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.874502 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c584f96e-f636-458e-9aca-f953ccf4a900-cert\") pod \"openstack-operator-controller-manager-768555b76-hzfmn\" (UID: \"c584f96e-f636-458e-9aca-f953ccf4a900\") " pod="openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.881279 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b0c2826-792e-44ca-9bc1-830aefee72d6-cert\") pod \"infra-operator-controller-manager-585fc5b659-hpkm4\" (UID: \"1b0c2826-792e-44ca-9bc1-830aefee72d6\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.882707 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c584f96e-f636-458e-9aca-f953ccf4a900-cert\") pod \"openstack-operator-controller-manager-768555b76-hzfmn\" (UID: \"c584f96e-f636-458e-9aca-f953ccf4a900\") " pod="openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn" Oct 14 15:06:06 crc kubenswrapper[4860]: I1014 15:06:06.952950 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-59cdc64769-6lzwd"] Oct 14 15:06:06 crc kubenswrapper[4860]: W1014 15:06:06.972514 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8680f35c_eae8_49e0_a670_d4b467a987f0.slice/crio-12f9fee407af65522a5ef430d1d31547d8b4305769d5997a537b7381ab16d8ad WatchSource:0}: Error finding container 12f9fee407af65522a5ef430d1d31547d8b4305769d5997a537b7381ab16d8ad: Status 404 returned error can't find the container with id 12f9fee407af65522a5ef430d1d31547d8b4305769d5997a537b7381ab16d8ad Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:06.999133 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-797d478b46-lfntw"] Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.071059 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.082266 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df4d54ec-6345-4b47-8ae8-58ae0bf6da7f-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt\" (UID: \"df4d54ec-6345-4b47-8ae8-58ae0bf6da7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.091970 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df4d54ec-6345-4b47-8ae8-58ae0bf6da7f-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt\" (UID: \"df4d54ec-6345-4b47-8ae8-58ae0bf6da7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.097708 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5777b4f897-2rpj7"] Oct 14 15:06:07 crc kubenswrapper[4860]: W1014 15:06:07.108918 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5397040a_47ac_487d_8e5a_8fd02d6ec654.slice/crio-285942da0c45f5e92ab95367d14ae9ec660c7a3c461a5c60c49fcadc0cfa8905 WatchSource:0}: Error finding container 285942da0c45f5e92ab95367d14ae9ec660c7a3c461a5c60c49fcadc0cfa8905: Status 404 returned error can't find the container with id 285942da0c45f5e92ab95367d14ae9ec660c7a3c461a5c60c49fcadc0cfa8905 Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.110170 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn" Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.152215 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7bb46cd7d-dgxfp"] Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.160605 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-59578bc799-bc4x8"] Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.166373 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d74794d9b-mdd5z"] Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.172155 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d9967f8dd-8mngx"] Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.186803 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-687df44cdb-f2rfg"] Oct 14 15:06:07 crc kubenswrapper[4860]: W1014 15:06:07.194684 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95d178d8_e3b2_4141_91af_b82fa61bd86a.slice/crio-43d11f1b3379e7e4dd823bcf31d3fc752436015acee9139ab6b717472d0b1f68 WatchSource:0}: Error finding container 43d11f1b3379e7e4dd823bcf31d3fc752436015acee9139ab6b717472d0b1f68: Status 404 returned error can't find the container with id 43d11f1b3379e7e4dd823bcf31d3fc752436015acee9139ab6b717472d0b1f68 Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.221433 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.373453 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8ht4q" event={"ID":"4bbd7b36-79fe-423b-a5c6-2237390dea3f","Type":"ContainerStarted","Data":"a43082352d19680d76def20cf8a99388902e966ef8504e1d8944c26b29f4faf1"} Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.375209 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-6lzwd" event={"ID":"8680f35c-eae8-49e0-a670-d4b467a987f0","Type":"ContainerStarted","Data":"12f9fee407af65522a5ef430d1d31547d8b4305769d5997a537b7381ab16d8ad"} Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.376609 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-dgxfp" event={"ID":"65912b78-7ceb-4bd0-ab72-70fd3574b786","Type":"ContainerStarted","Data":"4a665a796be291b5d8a9827804950052c2f3773da6f29289d9ef902114762d3d"} Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.377408 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-lfntw" event={"ID":"1e6c58c7-4e05-4c8d-98f0-2063b1ba613f","Type":"ContainerStarted","Data":"73e2a9aa8cd4657a28d5e4dce5d236a00ed892dbc66aaab141e9243040676d18"} Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.378105 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59578bc799-bc4x8" 
event={"ID":"786a4f8b-062c-46b7-8028-5079481427db","Type":"ContainerStarted","Data":"1a772aceb71046bfd335634476d95257f696b5cdf7715e023f3a452f034177cc"} Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.378796 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-mdd5z" event={"ID":"95d178d8-e3b2-4141-91af-b82fa61bd86a","Type":"ContainerStarted","Data":"43d11f1b3379e7e4dd823bcf31d3fc752436015acee9139ab6b717472d0b1f68"} Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.380155 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-8mngx" event={"ID":"e24ba4ef-9297-4d61-a338-941ce00a2391","Type":"ContainerStarted","Data":"0528b2dbd1c6c399e53990e56b85f66a7f01b5b220dda5840cb79e7ef458f6d4"} Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.380856 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-f2rfg" event={"ID":"6b31fe2f-695e-4b8b-b632-7075e4a9740f","Type":"ContainerStarted","Data":"eb2f824eef12afcd9b5de619ac823895b8000a0f555f698ad4962e3c2263b16e"} Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.381618 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-2rpj7" event={"ID":"5397040a-47ac-487d-8e5a-8fd02d6ec654","Type":"ContainerStarted","Data":"285942da0c45f5e92ab95367d14ae9ec660c7a3c461a5c60c49fcadc0cfa8905"} Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.382541 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-lwpwz" event={"ID":"95d281e4-c140-42c3-ba4e-3d36e98bb29c","Type":"ContainerStarted","Data":"756214c51643c73a2875ac5e324772106356c40c9d7a8db6493dd0466de81924"} Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.522497 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-ffcdd6c94-9m7mm"] Oct 14 15:06:07 crc kubenswrapper[4860]: W1014 15:06:07.523306 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9603eeb_cc1b_4dc8_82e6_9cf64109c774.slice/crio-30d8fca24fa6772739f2411b387ac9586d3bdd7da538f147bce77b3e85a85a7f WatchSource:0}: Error finding container 30d8fca24fa6772739f2411b387ac9586d3bdd7da538f147bce77b3e85a85a7f: Status 404 returned error can't find the container with id 30d8fca24fa6772739f2411b387ac9586d3bdd7da538f147bce77b3e85a85a7f Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.553167 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57bb74c7bf-tw4ph"] Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.559196 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-74cb5cbc49-l9w8x"] Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.573443 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-646675d848-lzb7d"] Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.579525 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9"] Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.629853 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w"] Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.652694 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-gthfm"] Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.673790 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq"] Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.685868 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4"] Oct 14 15:06:07 crc kubenswrapper[4860]: W1014 15:06:07.700544 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d1ea96c_cdba_4586_ae97_c008ff1ed05e.slice/crio-4f2da54c7b027668b995772a04dfe4c25cd01223c615e28cbaa9be07d779a357 WatchSource:0}: Error finding container 4f2da54c7b027668b995772a04dfe4c25cd01223c615e28cbaa9be07d779a357: Status 404 returned error can't find the container with id 4f2da54c7b027668b995772a04dfe4c25cd01223c615e28cbaa9be07d779a357 Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.713608 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl"] Oct 14 15:06:07 crc kubenswrapper[4860]: E1014 15:06:07.720483 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bxc5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-gthfm_openstack-operators(0bfbfdd2-7b80-46dc-a353-0f5858f0ae4b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 15:06:07 crc 
kubenswrapper[4860]: E1014 15:06:07.722643 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-gthfm" podUID="0bfbfdd2-7b80-46dc-a353-0f5858f0ae4b" Oct 14 15:06:07 crc kubenswrapper[4860]: W1014 15:06:07.731202 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d202f65_a2f2_4200_b3ea_e7a78ca5d5a5.slice/crio-8dc475b2d7cb2b5fea0a07e0509ae3b5993c7950979bbbe7f7a378319eadcd74 WatchSource:0}: Error finding container 8dc475b2d7cb2b5fea0a07e0509ae3b5993c7950979bbbe7f7a378319eadcd74: Status 404 returned error can't find the container with id 8dc475b2d7cb2b5fea0a07e0509ae3b5993c7950979bbbe7f7a378319eadcd74 Oct 14 15:06:07 crc kubenswrapper[4860]: E1014 15:06:07.734941 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fhdl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f4d5dfdc6-rpjh4_openstack-operators(4450d3fe-e520-48c6-ac1d-25344bdedc5e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.748812 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4"] Oct 14 
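The burst of `ErrImagePull: pull QPS exceeded` failures above is the kubelet's own client-side image-pull throttle rejecting pulls, not a registry error. A minimal sketch of that token-bucket behavior, assuming the stock KubeletConfiguration defaults `registryPullQPS: 5` and `registryBurst: 10` (the kubelet's real limiter lives in k8s.io/client-go/util/flowcontrol; golang.org/x/time/rate is used here only to show the same shape):

```go
// Token-bucket sketch of the kubelet's image-pull throttle,
// assuming defaults registryPullQPS=5 and registryBurst=10.
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10) // 5 QPS, burst of 10

	// ~20 operator images requested in the same instant, as when a
	// whole catalog of controller-managers lands on one node at once.
	for i := 1; i <= 20; i++ {
		if limiter.Allow() {
			fmt.Printf("pull %2d: started\n", i)
		} else {
			// surfaces in the log as ErrImagePull: pull QPS exceeded
			fmt.Printf("pull %2d: pull QPS exceeded\n", i)
		}
	}
}
```

With some twenty operator deployments scheduled in the same second, only the first ten pulls fit the burst; the rest fail immediately and drop into back-off, which matches the mix of `ContainerStarted` and `ErrImagePull` records that follows.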
Oct 14 15:06:07 crc kubenswrapper[4860]: E1014 15:06:07.750972 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:315e558023b41ac1aa215082096995a03810c5b42910a33b00427ffcac9c6a14,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tkp5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-869cc7797f-4kql9_openstack-operators(3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.760296 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn"]
Oct 14 15:06:07 crc kubenswrapper[4860]: E1014 15:06:07.778859 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:09deecf840d38ff6af3c924729cf0a9444bc985848bfbe7c918019b88a6bc4d7,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bxkhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6d7c7ddf95-5l2qq_openstack-operators(ad189aa9-4e21-4d7e-b1de-83497bd83376): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Oct 14 15:06:07 crc kubenswrapper[4860]: E1014 15:06:07.779005 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d33c1f507e1f5b9a4bf226ad98917e92101ac66b36e19d35cbe04ae7014f6bff,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cv9h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-664664cb68-l6rbl_openstack-operators(d0ac64a4-cdc5-4362-9359-712291fafbdf): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Oct 14 15:06:07 crc kubenswrapper[4860]: E1014 15:06:07.780101 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-644g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-585fc5b659-hpkm4_openstack-operators(1b0c2826-792e-44ca-9bc1-830aefee72d6): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Oct 14 15:06:07 crc kubenswrapper[4860]: E1014 15:06:07.946784 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4" podUID="4450d3fe-e520-48c6-ac1d-25344bdedc5e"
Oct 14 15:06:07 crc kubenswrapper[4860]: I1014 15:06:07.985673 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt"]
Oct 14 15:06:08 crc kubenswrapper[4860]: W1014 15:06:08.009514 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf4d54ec_6345_4b47_8ae8_58ae0bf6da7f.slice/crio-e49ac4ea5af430750221aae14c131ee71f90ef670c8df712c51843fc5ab342e3 WatchSource:0}: Error finding container e49ac4ea5af430750221aae14c131ee71f90ef670c8df712c51843fc5ab342e3: Status 404 returned error can't find the container with id e49ac4ea5af430750221aae14c131ee71f90ef670c8df712c51843fc5ab342e3
Oct 14 15:06:08 crc kubenswrapper[4860]: E1014 15:06:08.077363 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl" podUID="d0ac64a4-cdc5-4362-9359-712291fafbdf"
Oct 14 15:06:08 crc kubenswrapper[4860]: E1014 15:06:08.110568 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9" podUID="3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5"
Oct 14 15:06:08 crc kubenswrapper[4860]: E1014 15:06:08.361308 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" podUID="1b0c2826-792e-44ca-9bc1-830aefee72d6"
Oct 14 15:06:08 crc kubenswrapper[4860]: E1014 15:06:08.396815 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq" podUID="ad189aa9-4e21-4d7e-b1de-83497bd83376"
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.402069 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-tw4ph" event={"ID":"1f864c3d-2e54-459b-b613-3785d0cf4ae6","Type":"ContainerStarted","Data":"471bb7df2a542d5f07213a96b37e2738a0e3e67be391bf6ffe11b386b70a71ba"}
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.409726 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-l9w8x" event={"ID":"e3456832-68ce-443e-825f-9d6af6cf829f","Type":"ContainerStarted","Data":"a062593fdded602aa6d75621b2e2e4afab21695de4b5a62d178fe934e3dc1ea7"}
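The `{{500 -3} {} 500m DecimalSI}` fragments inside the dumped Container specs above are `resource.Quantity` values printed as raw Go structs: an unscaled integer plus a base-10 exponent (500 x 10^-3 = 0.5 CPU), a cached string, and the format. A small sketch with the real apimachinery type, using the limits declared by the manager containers above:

```go
// Decoding the resource fragments from the container specs:
// {{500 -3} {} 500m DecimalSI}  -> 0.5 CPU (500 millicores)
// {{536870912 0} {} BinarySI}   -> 512Mi of memory
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	cpu := resource.MustParse("500m")  // stored as 500 x 10^-3
	mem := resource.MustParse("512Mi") // stored as 536870912 x 10^0

	fmt.Println(cpu.MilliValue()) // 500
	fmt.Println(mem.Value())      // 536870912
	fmt.Println(mem.String())     // 512Mi
}
```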
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.422809 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4" event={"ID":"4450d3fe-e520-48c6-ac1d-25344bdedc5e","Type":"ContainerStarted","Data":"bc0d43df3cce29214058e3d3b104bb70e11a69e19635d53ff6014141c071f93b"}
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.422858 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4" event={"ID":"4450d3fe-e520-48c6-ac1d-25344bdedc5e","Type":"ContainerStarted","Data":"21c204f344d542d72f64d4b3620cf7722dc6e5fff5187f27f77f923c57050a64"}
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.427826 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq" event={"ID":"ad189aa9-4e21-4d7e-b1de-83497bd83376","Type":"ContainerStarted","Data":"b8b7b6c6e2e3276b0b9a3f5b879f174316f17983ec52c1b040340afccb0d48b2"}
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.430484 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-646675d848-lzb7d" event={"ID":"9d1ea96c-cdba-4586-ae97-c008ff1ed05e","Type":"ContainerStarted","Data":"4f2da54c7b027668b995772a04dfe4c25cd01223c615e28cbaa9be07d779a357"}
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.438495 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9" event={"ID":"3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5","Type":"ContainerStarted","Data":"9ec2f68d30bc2785653a410a22011dd18ad2c2b301efe1c18a612072dfd08a3f"}
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.438564 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9" event={"ID":"3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5","Type":"ContainerStarted","Data":"8dc475b2d7cb2b5fea0a07e0509ae3b5993c7950979bbbe7f7a378319eadcd74"}
Oct 14 15:06:08 crc kubenswrapper[4860]: E1014 15:06:08.455363 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:315e558023b41ac1aa215082096995a03810c5b42910a33b00427ffcac9c6a14\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9" podUID="3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5"
Oct 14 15:06:08 crc kubenswrapper[4860]: E1014 15:06:08.455384 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4" podUID="4450d3fe-e520-48c6-ac1d-25344bdedc5e"
Oct 14 15:06:08 crc kubenswrapper[4860]: E1014 15:06:08.455484 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:09deecf840d38ff6af3c924729cf0a9444bc985848bfbe7c918019b88a6bc4d7\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq" podUID="ad189aa9-4e21-4d7e-b1de-83497bd83376"
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.465379 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-9m7mm" event={"ID":"f9603eeb-cc1b-4dc8-82e6-9cf64109c774","Type":"ContainerStarted","Data":"30d8fca24fa6772739f2411b387ac9586d3bdd7da538f147bce77b3e85a85a7f"}
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.474290 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-gthfm" event={"ID":"0bfbfdd2-7b80-46dc-a353-0f5858f0ae4b","Type":"ContainerStarted","Data":"837cdbcf6ac6fa13df89674d8e54217c41af2c71f170869bea6849520ca3c83c"}
Oct 14 15:06:08 crc kubenswrapper[4860]: E1014 15:06:08.475808 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-gthfm" podUID="0bfbfdd2-7b80-46dc-a353-0f5858f0ae4b"
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.479714 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w" event={"ID":"572e90ee-e3d4-44a0-b3c5-d0005f4cb41c","Type":"ContainerStarted","Data":"1b020805c88eb30b3826ce739f0d89e30c90d67b5133c274efe2521a634490c0"}
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.486497 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" event={"ID":"1b0c2826-792e-44ca-9bc1-830aefee72d6","Type":"ContainerStarted","Data":"ffef8b7f2e65a6d93043d2170b690673dac67e8316c836915d4e86537bb78919"}
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.486551 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" event={"ID":"1b0c2826-792e-44ca-9bc1-830aefee72d6","Type":"ContainerStarted","Data":"c7e7db3c2627b5fb40004260cce35a37965316fdc718419d046fa1c15d15ed9c"}
Oct 14 15:06:08 crc kubenswrapper[4860]: E1014 15:06:08.488830 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492\\\"\"" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" podUID="1b0c2826-792e-44ca-9bc1-830aefee72d6"
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.491289 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl" event={"ID":"d0ac64a4-cdc5-4362-9359-712291fafbdf","Type":"ContainerStarted","Data":"a1cd2e7a0a749187f4fba07dfb45c6336e79ad7b3713cd5271387f4456a50eda"}
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.491339 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl" event={"ID":"d0ac64a4-cdc5-4362-9359-712291fafbdf","Type":"ContainerStarted","Data":"1d8d49bbb0603fc7f5007b004161408ceb4647bc2c40712b6eb390f23cac5914"}
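After a failed pull the kubelet does not retry immediately: the container is parked in ImagePullBackOff, and the repeated "Back-off pulling image" entries around this point are that state being re-reported on each pod sync, not fresh pull attempts. A sketch of the schedule, assuming the kubelet's compiled-in image back-off (10s initial delay, doubled per failure, capped at 300s):

```go
// Back-off schedule sketch for image pulls, assuming the kubelet's
// built-in parameters: 10s initial delay, doubling, 300s (5 min) cap.
// While a container sits inside the current window, sync attempts
// surface ImagePullBackOff instead of issuing a new pull.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second
	const maxDelay = 300 * time.Second

	for failure := 1; failure <= 7; failure++ {
		fmt.Printf("failure %d: next pull attempt in %v\n", failure, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

That is why the same pods flip between `ContainerStarted` (the sandbox and any already-present images start fine) and back-off errors for the one image that could not be pulled.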
Oct 14 15:06:08 crc kubenswrapper[4860]: E1014 15:06:08.493250 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d33c1f507e1f5b9a4bf226ad98917e92101ac66b36e19d35cbe04ae7014f6bff\\\"\"" pod="openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl" podUID="d0ac64a4-cdc5-4362-9359-712291fafbdf"
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.494398 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" event={"ID":"df4d54ec-6345-4b47-8ae8-58ae0bf6da7f","Type":"ContainerStarted","Data":"e49ac4ea5af430750221aae14c131ee71f90ef670c8df712c51843fc5ab342e3"}
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.507260 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn" event={"ID":"c584f96e-f636-458e-9aca-f953ccf4a900","Type":"ContainerStarted","Data":"c4069a5950960260bbf43265c776654bd2a0717e500fbfb7277ffbd48d54f36e"}
Oct 14 15:06:08 crc kubenswrapper[4860]: I1014 15:06:08.507343 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn" event={"ID":"c584f96e-f636-458e-9aca-f953ccf4a900","Type":"ContainerStarted","Data":"a9aff80b74392a6ca7ff112e858795ba53954b176d5179e8395129db25f2c933"}
Oct 14 15:06:09 crc kubenswrapper[4860]: I1014 15:06:09.538935 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq" event={"ID":"ad189aa9-4e21-4d7e-b1de-83497bd83376","Type":"ContainerStarted","Data":"4df634101093ec6f9a0c234239056fda7acc8ee2cbbb9b5a772e3145cbe4a5f8"}
Oct 14 15:06:09 crc kubenswrapper[4860]: E1014 15:06:09.551241 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:09deecf840d38ff6af3c924729cf0a9444bc985848bfbe7c918019b88a6bc4d7\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq" podUID="ad189aa9-4e21-4d7e-b1de-83497bd83376"
Oct 14 15:06:09 crc kubenswrapper[4860]: I1014 15:06:09.579448 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn" event={"ID":"c584f96e-f636-458e-9aca-f953ccf4a900","Type":"ContainerStarted","Data":"2538805e496d7d953ccd6de45417d41984eee625bc948f2a4b459001f96770db"}
Oct 14 15:06:09 crc kubenswrapper[4860]: I1014 15:06:09.579511 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn"
Oct 14 15:06:09 crc kubenswrapper[4860]: E1014 15:06:09.587999 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4" podUID="4450d3fe-e520-48c6-ac1d-25344bdedc5e"
Oct 14 15:06:09 crc kubenswrapper[4860]: E1014 15:06:09.626313 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d33c1f507e1f5b9a4bf226ad98917e92101ac66b36e19d35cbe04ae7014f6bff\\\"\"" pod="openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl" podUID="d0ac64a4-cdc5-4362-9359-712291fafbdf"
Oct 14 15:06:09 crc kubenswrapper[4860]: E1014 15:06:09.626921 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-gthfm" podUID="0bfbfdd2-7b80-46dc-a353-0f5858f0ae4b"
Oct 14 15:06:09 crc kubenswrapper[4860]: E1014 15:06:09.627076 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492\\\"\"" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" podUID="1b0c2826-792e-44ca-9bc1-830aefee72d6"
Oct 14 15:06:09 crc kubenswrapper[4860]: E1014 15:06:09.628166 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:315e558023b41ac1aa215082096995a03810c5b42910a33b00427ffcac9c6a14\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9" podUID="3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5"
Oct 14 15:06:09 crc kubenswrapper[4860]: I1014 15:06:09.845564 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn" podStartSLOduration=4.845504902 podStartE2EDuration="4.845504902s" podCreationTimestamp="2025-10-14 15:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:06:09.837628202 +0000 UTC m=+1031.424411651" watchObservedRunningTime="2025-10-14 15:06:09.845504902 +0000 UTC m=+1031.432288351"
Oct 14 15:06:10 crc kubenswrapper[4860]: E1014 15:06:10.592396 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:09deecf840d38ff6af3c924729cf0a9444bc985848bfbe7c918019b88a6bc4d7\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq" podUID="ad189aa9-4e21-4d7e-b1de-83497bd83376"
Oct 14 15:06:17 crc kubenswrapper[4860]: I1014 15:06:17.116958 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-768555b76-hzfmn"
Oct 14 15:06:21 crc kubenswrapper[4860]: E1014 15:06:21.723823 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:73736f216f886549901fbcfc823b072f73691c9a79ec79e59d100e992b9c1e34"
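In the `pod_startup_latency_tracker` entry above, the reported podStartSLOduration (4.845504902) is exactly the gap between podCreationTimestamp and watchObservedRunningTime, with the pulling timestamps left at Go's zero time because no successful pull was recorded for that pod. The arithmetic, using the timestamps from the entry itself:

```go
// Checking the startup-latency entry: podStartSLOduration matches
// watchObservedRunningTime - podCreationTimestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the "2025-10-14 15:06:05 +0000 UTC" form in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-10-14 15:06:05 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-10-14 15:06:09.845504902 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Println(running.Sub(created)) // 4.845504902s
}
```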
Oct 14 15:06:21 crc kubenswrapper[4860]: E1014 15:06:21.724630 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:73736f216f886549901fbcfc823b072f73691c9a79ec79e59d100e992b9c1e34,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pvjgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-687df44cdb-f2rfg_openstack-operators(6b31fe2f-695e-4b8b-b632-7075e4a9740f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 14 15:06:22 crc kubenswrapper[4860]: E1014 15:06:22.190091 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:783f711b4cb179819cfcb81167c3591c70671440f4551bbe48b7a8730567f577"
Oct 14 15:06:22 crc kubenswrapper[4860]: E1014 15:06:22.190362 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:783f711b4cb179819cfcb81167c3591c70671440f4551bbe48b7a8730567f577,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xswhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-64f84fcdbb-lwpwz_openstack-operators(95d281e4-c140-42c3-ba4e-3d36e98bb29c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 14 15:06:23 crc kubenswrapper[4860]: E1014 15:06:23.529524 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:98a5233f0596591acdf2c6a5838b08be108787cdb6ad1995b2b7886bac0fe6ca"
Oct 14 15:06:23 crc kubenswrapper[4860]: E1014 15:06:23.529954 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:98a5233f0596591acdf2c6a5838b08be108787cdb6ad1995b2b7886bac0fe6ca,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fvvxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-646675d848-lzb7d_openstack-operators(9d1ea96c-cdba-4586-ae97-c008ff1ed05e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 14 15:06:24 crc kubenswrapper[4860]: E1014 15:06:24.042525 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:ec11cb8711bd1af22db3c84aa854349ee46191add3db45aecfabb1d8410c04d0"
Oct 14 15:06:24 crc kubenswrapper[4860]: E1014 15:06:24.042742 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:ec11cb8711bd1af22db3c84aa854349ee46191add3db45aecfabb1d8410c04d0,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2h8ds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-6d9967f8dd-8mngx_openstack-operators(e24ba4ef-9297-4d61-a338-941ce00a2391): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 14 15:06:25 crc kubenswrapper[4860]: E1014 15:06:25.763536 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a"
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-ffcdd6c94-9m7mm_openstack-operators(f9603eeb-cc1b-4dc8-82e6-9cf64109c774): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:06:26 crc kubenswrapper[4860]: E1014 15:06:26.239506 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0" Oct 14 15:06:26 crc kubenswrapper[4860]: E1014 15:06:26.240068 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bmwhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-578874c84d-xpq8w_openstack-operators(572e90ee-e3d4-44a0-b3c5-d0005f4cb41c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:06:27 crc kubenswrapper[4860]: E1014 15:06:27.519017 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:47278ed28e02df00892f941763aa0d69547327318e8a983e07f4577acd288167" Oct 14 15:06:27 crc kubenswrapper[4860]: E1014 15:06:27.519225 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:47278ed28e02df00892f941763aa0d69547327318e8a983e07f4577acd288167,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pz8cb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-5777b4f897-2rpj7_openstack-operators(5397040a-47ac-487d-8e5a-8fd02d6ec654): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:06:29 crc kubenswrapper[4860]: I1014 15:06:29.246165 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:06:29 crc kubenswrapper[4860]: I1014 15:06:29.246611 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:06:34 crc kubenswrapper[4860]: E1014 15:06:34.241946 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-f2rfg" podUID="6b31fe2f-695e-4b8b-b632-7075e4a9740f" Oct 14 15:06:34 crc kubenswrapper[4860]: E1014 15:06:34.252570 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-lwpwz" podUID="95d281e4-c140-42c3-ba4e-3d36e98bb29c" Oct 14 15:06:34 crc kubenswrapper[4860]: E1014 15:06:34.410205 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-646675d848-lzb7d" podUID="9d1ea96c-cdba-4586-ae97-c008ff1ed05e" Oct 14 15:06:34 crc kubenswrapper[4860]: E1014 15:06:34.492485 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w" podUID="572e90ee-e3d4-44a0-b3c5-d0005f4cb41c" Oct 14 15:06:34 crc kubenswrapper[4860]: E1014 15:06:34.581018 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying 
config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-8mngx" podUID="e24ba4ef-9297-4d61-a338-941ce00a2391" Oct 14 15:06:34 crc kubenswrapper[4860]: E1014 15:06:34.667428 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-9m7mm" podUID="f9603eeb-cc1b-4dc8-82e6-9cf64109c774" Oct 14 15:06:34 crc kubenswrapper[4860]: I1014 15:06:34.788529 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w" event={"ID":"572e90ee-e3d4-44a0-b3c5-d0005f4cb41c","Type":"ContainerStarted","Data":"2ccd5442ca73b0e874583a249d91a5b041a92a0d15588037cfecfb8410cfba16"} Oct 14 15:06:34 crc kubenswrapper[4860]: E1014 15:06:34.793530 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w" podUID="572e90ee-e3d4-44a0-b3c5-d0005f4cb41c" Oct 14 15:06:34 crc kubenswrapper[4860]: I1014 15:06:34.802892 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8ht4q" event={"ID":"4bbd7b36-79fe-423b-a5c6-2237390dea3f","Type":"ContainerStarted","Data":"cca921ce390309109deacd9c750b1ee59813da455e3dd195659d1e0c8ac3fbf5"} Oct 14 15:06:34 crc kubenswrapper[4860]: I1014 15:06:34.809913 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-f2rfg" event={"ID":"6b31fe2f-695e-4b8b-b632-7075e4a9740f","Type":"ContainerStarted","Data":"59e9bf750bf1a5a2c8b0db2e847773b4d4a9500238196bd9046cb7e27d911fb2"} Oct 14 15:06:34 crc kubenswrapper[4860]: I1014 15:06:34.822058 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-6lzwd" event={"ID":"8680f35c-eae8-49e0-a670-d4b467a987f0","Type":"ContainerStarted","Data":"74928b6a3b18c7b474da19110db49adfbdd6b20cd9575f6c56d8fe2d244d6af6"} Oct 14 15:06:34 crc kubenswrapper[4860]: I1014 15:06:34.827926 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-646675d848-lzb7d" event={"ID":"9d1ea96c-cdba-4586-ae97-c008ff1ed05e","Type":"ContainerStarted","Data":"b587348034921c5d7cceae3a204b2fda677c882e7e009287d16608d1b381c0df"} Oct 14 15:06:34 crc kubenswrapper[4860]: I1014 15:06:34.855700 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-lfntw" event={"ID":"1e6c58c7-4e05-4c8d-98f0-2063b1ba613f","Type":"ContainerStarted","Data":"26b0cd617a0e09f774882a1a089af3fcb4fbeabd8d566eb8421d7965ca85081d"} Oct 14 15:06:34 crc kubenswrapper[4860]: I1014 15:06:34.876355 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-9m7mm" event={"ID":"f9603eeb-cc1b-4dc8-82e6-9cf64109c774","Type":"ContainerStarted","Data":"6710c3ef2f9e9a297a963d90c69104062d6e045afd6c9fec21edc230090c5096"} Oct 14 15:06:34 crc kubenswrapper[4860]: E1014 15:06:34.879964 4860 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a\\\"\"" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-9m7mm" podUID="f9603eeb-cc1b-4dc8-82e6-9cf64109c774" Oct 14 15:06:34 crc kubenswrapper[4860]: I1014 15:06:34.890360 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-tw4ph" event={"ID":"1f864c3d-2e54-459b-b613-3785d0cf4ae6","Type":"ContainerStarted","Data":"643ae167cfb358ff767c3174efac9e5419f54d514c50a42cc9f4fc815159a061"} Oct 14 15:06:34 crc kubenswrapper[4860]: I1014 15:06:34.908665 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-dgxfp" event={"ID":"65912b78-7ceb-4bd0-ab72-70fd3574b786","Type":"ContainerStarted","Data":"d8938c8e2e9a06a698c53365f76e63c357303c4744448ab67a0bfea897484b26"} Oct 14 15:06:34 crc kubenswrapper[4860]: I1014 15:06:34.921429 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-8mngx" event={"ID":"e24ba4ef-9297-4d61-a338-941ce00a2391","Type":"ContainerStarted","Data":"3618fb281849fdc2841c25fd1c78465cff8640dac13c1a800ebc9ca1ab6aa62c"} Oct 14 15:06:34 crc kubenswrapper[4860]: I1014 15:06:34.940865 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-mdd5z" event={"ID":"95d178d8-e3b2-4141-91af-b82fa61bd86a","Type":"ContainerStarted","Data":"91ead880c5619797bea7d06deccccfdef148c36f9e1bd9b475b53edcc61cb51f"} Oct 14 15:06:34 crc kubenswrapper[4860]: I1014 15:06:34.957281 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-lwpwz" event={"ID":"95d281e4-c140-42c3-ba4e-3d36e98bb29c","Type":"ContainerStarted","Data":"2c8a1bce627b1365b0ec61ddd9043e67a859c2a6935514d7fb11e6e209b40f3f"} Oct 14 15:06:35 crc kubenswrapper[4860]: E1014 15:06:35.383605 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-2rpj7" podUID="5397040a-47ac-487d-8e5a-8fd02d6ec654" Oct 14 15:06:35 crc kubenswrapper[4860]: I1014 15:06:35.985633 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" event={"ID":"1b0c2826-792e-44ca-9bc1-830aefee72d6","Type":"ContainerStarted","Data":"6fc21e571d616ea27a10bc536d8476cc4da54602f4fb0494e88eb0cebece01e1"} Oct 14 15:06:35 crc kubenswrapper[4860]: I1014 15:06:35.986853 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" Oct 14 15:06:35 crc kubenswrapper[4860]: I1014 15:06:35.993336 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59578bc799-bc4x8" event={"ID":"786a4f8b-062c-46b7-8028-5079481427db","Type":"ContainerStarted","Data":"f0b332fd5a55f7c4679517862385016ce83ccc7ad370cbe34c1e826d70558f07"} Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.005437 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-797d478b46-lfntw" event={"ID":"1e6c58c7-4e05-4c8d-98f0-2063b1ba613f","Type":"ContainerStarted","Data":"9e2626009c327263e37ffcdbf4c9a3d51da7b2b957d5ef5a69c849fb796a2d3d"} Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.006253 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-lfntw" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.008652 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-gthfm" event={"ID":"0bfbfdd2-7b80-46dc-a353-0f5858f0ae4b","Type":"ContainerStarted","Data":"16e28e5798af2be8f1135cc4672bcd01cf38b663dd6b91e9449119e05fcc5b46"} Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.011331 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-tw4ph" event={"ID":"1f864c3d-2e54-459b-b613-3785d0cf4ae6","Type":"ContainerStarted","Data":"fd6c69e7e7a4cf066bd2ccb21cc10a55028ec52c6c2a0e05c3d473bb64e5241b"} Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.011624 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-tw4ph" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.019408 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4" event={"ID":"4450d3fe-e520-48c6-ac1d-25344bdedc5e","Type":"ContainerStarted","Data":"c812447dd119f9c17a473fd5a621b036246cf6e6e1405f1441ae9140a6ad3787"} Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.020080 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.028965 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-2rpj7" event={"ID":"5397040a-47ac-487d-8e5a-8fd02d6ec654","Type":"ContainerStarted","Data":"ebe5b21c3bde821571c4da5017e1ea34c6c044828ef12eb0aec47ae234d52cc3"} Oct 14 15:06:36 crc kubenswrapper[4860]: E1014 15:06:36.030182 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:47278ed28e02df00892f941763aa0d69547327318e8a983e07f4577acd288167\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-2rpj7" podUID="5397040a-47ac-487d-8e5a-8fd02d6ec654" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.066256 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" event={"ID":"df4d54ec-6345-4b47-8ae8-58ae0bf6da7f","Type":"ContainerStarted","Data":"43d266ed4505c006d9c6570cbc9beecec2bb7949aa5a20599fb2666fb2f22102"} Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.091231 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9" event={"ID":"3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5","Type":"ContainerStarted","Data":"4522d468855a94926fbe74acca172e8c568468b81fcbc73ea05f3227057c558b"} Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.091463 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.093825 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl" event={"ID":"d0ac64a4-cdc5-4362-9359-712291fafbdf","Type":"ContainerStarted","Data":"355a21968a173288a6ea3dec002eb6595e7be0eabec171e940893a9323f4ec58"} Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.093977 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.115253 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-6lzwd" event={"ID":"8680f35c-eae8-49e0-a670-d4b467a987f0","Type":"ContainerStarted","Data":"87ffa9e1afc44e1f907b6cb2709d2d2675c978726769cd4d1bac87a69384f549"} Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.115970 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-6lzwd" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.118112 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-l9w8x" event={"ID":"e3456832-68ce-443e-825f-9d6af6cf829f","Type":"ContainerStarted","Data":"27c087f7d8c8deb7779cad82ebd0a7f537a1874704a814226c2904fd52fe1f38"} Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.118702 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-l9w8x" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.120823 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8ht4q" event={"ID":"4bbd7b36-79fe-423b-a5c6-2237390dea3f","Type":"ContainerStarted","Data":"08a4a17b931cffcb436df999b857852bb23a0c60255fc03eb0db27200038fd34"} Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.121186 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8ht4q" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.146706 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" podStartSLOduration=5.843930584 podStartE2EDuration="32.146689128s" podCreationTimestamp="2025-10-14 15:06:04 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.779955839 +0000 UTC m=+1029.366739288" lastFinishedPulling="2025-10-14 15:06:34.082714383 +0000 UTC m=+1055.669497832" observedRunningTime="2025-10-14 15:06:36.136613895 +0000 UTC m=+1057.723397344" watchObservedRunningTime="2025-10-14 15:06:36.146689128 +0000 UTC m=+1057.733472577" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.167793 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq" event={"ID":"ad189aa9-4e21-4d7e-b1de-83497bd83376","Type":"ContainerStarted","Data":"0d6c294efbab7654793866c2104041339a98e7ca9b976f440ddc5eabe2dc195b"} Oct 14 15:06:36 crc kubenswrapper[4860]: E1014 15:06:36.175260 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w" podUID="572e90ee-e3d4-44a0-b3c5-d0005f4cb41c" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.307881 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-lfntw" podStartSLOduration=9.012340134 podStartE2EDuration="31.307861435s" podCreationTimestamp="2025-10-14 15:06:05 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.005202786 +0000 UTC m=+1028.591986235" lastFinishedPulling="2025-10-14 15:06:29.300724087 +0000 UTC m=+1050.887507536" observedRunningTime="2025-10-14 15:06:36.284850649 +0000 UTC m=+1057.871634098" watchObservedRunningTime="2025-10-14 15:06:36.307861435 +0000 UTC m=+1057.894644884" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.310479 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-gthfm" podStartSLOduration=3.936988835 podStartE2EDuration="30.310470298s" podCreationTimestamp="2025-10-14 15:06:06 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.720301247 +0000 UTC m=+1029.307084696" lastFinishedPulling="2025-10-14 15:06:34.09378271 +0000 UTC m=+1055.680566159" observedRunningTime="2025-10-14 15:06:36.246409649 +0000 UTC m=+1057.833193098" watchObservedRunningTime="2025-10-14 15:06:36.310470298 +0000 UTC m=+1057.897253747" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.415057 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4" podStartSLOduration=5.071409504 podStartE2EDuration="31.415022306s" podCreationTimestamp="2025-10-14 15:06:05 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.734723255 +0000 UTC m=+1029.321506704" lastFinishedPulling="2025-10-14 15:06:34.078336057 +0000 UTC m=+1055.665119506" observedRunningTime="2025-10-14 15:06:36.352825552 +0000 UTC m=+1057.939609001" watchObservedRunningTime="2025-10-14 15:06:36.415022306 +0000 UTC m=+1058.001805745" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.418439 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8ht4q" podStartSLOduration=10.407048567 podStartE2EDuration="32.418427068s" podCreationTimestamp="2025-10-14 15:06:04 +0000 UTC" firstStartedPulling="2025-10-14 15:06:06.880173452 +0000 UTC m=+1028.466956901" lastFinishedPulling="2025-10-14 15:06:28.891551953 +0000 UTC m=+1050.478335402" observedRunningTime="2025-10-14 15:06:36.412792972 +0000 UTC m=+1057.999576421" watchObservedRunningTime="2025-10-14 15:06:36.418427068 +0000 UTC m=+1058.005210527" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.520181 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl" podStartSLOduration=5.227853616 podStartE2EDuration="31.520161138s" podCreationTimestamp="2025-10-14 15:06:05 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.778876432 +0000 UTC m=+1029.365659881" lastFinishedPulling="2025-10-14 15:06:34.071183954 +0000 UTC m=+1055.657967403" observedRunningTime="2025-10-14 15:06:36.46730884 +0000 UTC m=+1058.054092289" watchObservedRunningTime="2025-10-14 15:06:36.520161138 +0000 UTC 
m=+1058.106944587" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.600481 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-l9w8x" podStartSLOduration=10.882600766 podStartE2EDuration="32.60046397s" podCreationTimestamp="2025-10-14 15:06:04 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.582417182 +0000 UTC m=+1029.169200631" lastFinishedPulling="2025-10-14 15:06:29.300280396 +0000 UTC m=+1050.887063835" observedRunningTime="2025-10-14 15:06:36.522423963 +0000 UTC m=+1058.109207402" watchObservedRunningTime="2025-10-14 15:06:36.60046397 +0000 UTC m=+1058.187247419" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.657367 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-6lzwd" podStartSLOduration=10.3306321 podStartE2EDuration="32.657349516s" podCreationTimestamp="2025-10-14 15:06:04 +0000 UTC" firstStartedPulling="2025-10-14 15:06:06.974416181 +0000 UTC m=+1028.561199630" lastFinishedPulling="2025-10-14 15:06:29.301133597 +0000 UTC m=+1050.887917046" observedRunningTime="2025-10-14 15:06:36.655154882 +0000 UTC m=+1058.241938331" watchObservedRunningTime="2025-10-14 15:06:36.657349516 +0000 UTC m=+1058.244132965" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.692556 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9" podStartSLOduration=5.413069426 podStartE2EDuration="31.692532146s" podCreationTimestamp="2025-10-14 15:06:05 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.750682071 +0000 UTC m=+1029.337465520" lastFinishedPulling="2025-10-14 15:06:34.030144791 +0000 UTC m=+1055.616928240" observedRunningTime="2025-10-14 15:06:36.689932603 +0000 UTC m=+1058.276716052" watchObservedRunningTime="2025-10-14 15:06:36.692532146 +0000 UTC m=+1058.279315595" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.720140 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-tw4ph" podStartSLOduration=10.410588514 podStartE2EDuration="31.720120074s" podCreationTimestamp="2025-10-14 15:06:05 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.583094419 +0000 UTC m=+1029.169877868" lastFinishedPulling="2025-10-14 15:06:28.892625979 +0000 UTC m=+1050.479409428" observedRunningTime="2025-10-14 15:06:36.717457359 +0000 UTC m=+1058.304240808" watchObservedRunningTime="2025-10-14 15:06:36.720120074 +0000 UTC m=+1058.306903523" Oct 14 15:06:36 crc kubenswrapper[4860]: I1014 15:06:36.742079 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq" podStartSLOduration=5.330003076 podStartE2EDuration="31.742059693s" podCreationTimestamp="2025-10-14 15:06:05 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.778266998 +0000 UTC m=+1029.365050447" lastFinishedPulling="2025-10-14 15:06:34.190323625 +0000 UTC m=+1055.777107064" observedRunningTime="2025-10-14 15:06:36.74190583 +0000 UTC m=+1058.328689279" watchObservedRunningTime="2025-10-14 15:06:36.742059693 +0000 UTC m=+1058.328843142" Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.175358 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-8mngx" 
event={"ID":"e24ba4ef-9297-4d61-a338-941ce00a2391","Type":"ContainerStarted","Data":"270146e529eb0e29390d8cafcdb4a385f6b1cfeb22c2b4b7ef9a7e73f6b12038"} Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.175528 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-8mngx" Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.177883 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-lwpwz" event={"ID":"95d281e4-c140-42c3-ba4e-3d36e98bb29c","Type":"ContainerStarted","Data":"f8d419490fbbffb7ed5c3eb51501b75e3c3d0bf03df26d218c07029f6a088e60"} Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.178391 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-lwpwz" Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.180710 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-646675d848-lzb7d" event={"ID":"9d1ea96c-cdba-4586-ae97-c008ff1ed05e","Type":"ContainerStarted","Data":"91c56fc6cfae582b8465b1c10161f909f0c8f55a6f86a8783f7101c576f633cc"} Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.180803 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-646675d848-lzb7d" Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.183059 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-dgxfp" event={"ID":"65912b78-7ceb-4bd0-ab72-70fd3574b786","Type":"ContainerStarted","Data":"80e731c262ee98e5ee252414b7c31bfb0b029815c67d248c156d6135ccc127aa"} Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.183369 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-dgxfp" Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.185573 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-mdd5z" event={"ID":"95d178d8-e3b2-4141-91af-b82fa61bd86a","Type":"ContainerStarted","Data":"e7b4552e7271d5e6a846434c9d5b5ed0d01ad59b3105d27ab263adce2d8a75dc"} Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.185690 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-mdd5z" Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.187815 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" event={"ID":"df4d54ec-6345-4b47-8ae8-58ae0bf6da7f","Type":"ContainerStarted","Data":"4bd2119fc801a1627840a41362be916e2417c6b5598e6898aaf92628287304a4"} Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.188300 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.191137 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-l9w8x" event={"ID":"e3456832-68ce-443e-825f-9d6af6cf829f","Type":"ContainerStarted","Data":"88c84c5002a5ebcf0b538d807cd1076b447a75063eb500f061056528619e34c2"} Oct 14 15:06:37 crc 
kubenswrapper[4860]: I1014 15:06:37.194929 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59578bc799-bc4x8" event={"ID":"786a4f8b-062c-46b7-8028-5079481427db","Type":"ContainerStarted","Data":"6d64052102518eff4bd2c5b23d9fc09b61b95e76d0f782dbc494a8bd797b00ce"} Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.194969 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-59578bc799-bc4x8" Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.197724 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-f2rfg" event={"ID":"6b31fe2f-695e-4b8b-b632-7075e4a9740f","Type":"ContainerStarted","Data":"eac06ba15e1bc58d81f4976ed6e6452be8ede80da2b2c556d75945f20991f267"} Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.228841 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-8mngx" podStartSLOduration=4.621652939 podStartE2EDuration="33.228820933s" podCreationTimestamp="2025-10-14 15:06:04 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.194660256 +0000 UTC m=+1028.781443705" lastFinishedPulling="2025-10-14 15:06:35.80182825 +0000 UTC m=+1057.388611699" observedRunningTime="2025-10-14 15:06:37.225193135 +0000 UTC m=+1058.811976584" watchObservedRunningTime="2025-10-14 15:06:37.228820933 +0000 UTC m=+1058.815604382" Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.257536 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-59578bc799-bc4x8" podStartSLOduration=11.617000733 podStartE2EDuration="33.257519507s" podCreationTimestamp="2025-10-14 15:06:04 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.251128221 +0000 UTC m=+1028.837911670" lastFinishedPulling="2025-10-14 15:06:28.891646995 +0000 UTC m=+1050.478430444" observedRunningTime="2025-10-14 15:06:37.256719298 +0000 UTC m=+1058.843502747" watchObservedRunningTime="2025-10-14 15:06:37.257519507 +0000 UTC m=+1058.844302956" Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.356553 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" podStartSLOduration=11.068889139 podStartE2EDuration="32.356535891s" podCreationTimestamp="2025-10-14 15:06:05 +0000 UTC" firstStartedPulling="2025-10-14 15:06:08.013560077 +0000 UTC m=+1029.600343526" lastFinishedPulling="2025-10-14 15:06:29.301206829 +0000 UTC m=+1050.887990278" observedRunningTime="2025-10-14 15:06:37.321693838 +0000 UTC m=+1058.908477297" watchObservedRunningTime="2025-10-14 15:06:37.356535891 +0000 UTC m=+1058.943319340" Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.356922 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-646675d848-lzb7d" podStartSLOduration=4.073395282 podStartE2EDuration="32.35691769s" podCreationTimestamp="2025-10-14 15:06:05 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.717219071 +0000 UTC m=+1029.304002520" lastFinishedPulling="2025-10-14 15:06:36.000741469 +0000 UTC m=+1057.587524928" observedRunningTime="2025-10-14 15:06:37.35400301 +0000 UTC m=+1058.940786469" watchObservedRunningTime="2025-10-14 15:06:37.35691769 +0000 UTC m=+1058.943701139" Oct 14 15:06:37 
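[Annotation] Each "SyncLoop (PLEG): event for pod ... ContainerStarted" line is the kubelet's pod lifecycle event generator noticing a new container or sandbox ID (the Data field). From outside the node the same transitions surface as pod status updates, which can be followed with a watch; a sketch, again assuming the `kubernetes` Python client:

```python
# Sketch: follow pod status transitions corresponding to the PLEG
# ContainerStarted events above. Assumes the `kubernetes` Python client.
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod,
                      namespace="openstack-operators", timeout_seconds=60):
    pod = event["object"]
    for cs in pod.status.container_statuses or []:
        if cs.state.running:  # container has started
            print(event["type"], pod.metadata.name, cs.name, cs.container_id)
```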
crc kubenswrapper[4860]: I1014 15:06:37.384532 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-dgxfp" podStartSLOduration=11.709715666 podStartE2EDuration="33.384515668s" podCreationTimestamp="2025-10-14 15:06:04 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.218298888 +0000 UTC m=+1028.805082337" lastFinishedPulling="2025-10-14 15:06:28.89309889 +0000 UTC m=+1050.479882339" observedRunningTime="2025-10-14 15:06:37.382907999 +0000 UTC m=+1058.969691448" watchObservedRunningTime="2025-10-14 15:06:37.384515668 +0000 UTC m=+1058.971299117" Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.440287 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-lwpwz" podStartSLOduration=4.382674711 podStartE2EDuration="33.440270386s" podCreationTimestamp="2025-10-14 15:06:04 +0000 UTC" firstStartedPulling="2025-10-14 15:06:06.743925908 +0000 UTC m=+1028.330709367" lastFinishedPulling="2025-10-14 15:06:35.801521593 +0000 UTC m=+1057.388305042" observedRunningTime="2025-10-14 15:06:37.437905818 +0000 UTC m=+1059.024689267" watchObservedRunningTime="2025-10-14 15:06:37.440270386 +0000 UTC m=+1059.027053825" Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.466436 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-mdd5z" podStartSLOduration=11.775860476 podStartE2EDuration="33.466419599s" podCreationTimestamp="2025-10-14 15:06:04 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.202375714 +0000 UTC m=+1028.789159163" lastFinishedPulling="2025-10-14 15:06:28.892934837 +0000 UTC m=+1050.479718286" observedRunningTime="2025-10-14 15:06:37.461441038 +0000 UTC m=+1059.048224487" watchObservedRunningTime="2025-10-14 15:06:37.466419599 +0000 UTC m=+1059.053203038" Oct 14 15:06:37 crc kubenswrapper[4860]: I1014 15:06:37.502016 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-f2rfg" podStartSLOduration=4.951070445 podStartE2EDuration="33.502000818s" podCreationTimestamp="2025-10-14 15:06:04 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.252365952 +0000 UTC m=+1028.839149401" lastFinishedPulling="2025-10-14 15:06:35.803296325 +0000 UTC m=+1057.390079774" observedRunningTime="2025-10-14 15:06:37.497884989 +0000 UTC m=+1059.084668438" watchObservedRunningTime="2025-10-14 15:06:37.502000818 +0000 UTC m=+1059.088784267" Oct 14 15:06:38 crc kubenswrapper[4860]: I1014 15:06:38.205197 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-9m7mm" event={"ID":"f9603eeb-cc1b-4dc8-82e6-9cf64109c774","Type":"ContainerStarted","Data":"5686d0a7d96131bf8da6b64ef1b81d15be8b2167a8e6d34030537889e6e91fbf"} Oct 14 15:06:38 crc kubenswrapper[4860]: I1014 15:06:38.206092 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-f2rfg" Oct 14 15:06:39 crc kubenswrapper[4860]: I1014 15:06:39.213048 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-2rpj7" event={"ID":"5397040a-47ac-487d-8e5a-8fd02d6ec654","Type":"ContainerStarted","Data":"f8a38669fb1ed81106911a04f53d00e21e2c9d04b40547e942d30c3c7360b988"} Oct 14 15:06:39 
crc kubenswrapper[4860]: I1014 15:06:39.213546 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-2rpj7" Oct 14 15:06:39 crc kubenswrapper[4860]: I1014 15:06:39.222724 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt" Oct 14 15:06:39 crc kubenswrapper[4860]: I1014 15:06:39.236456 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-2rpj7" podStartSLOduration=3.060204034 podStartE2EDuration="34.236434885s" podCreationTimestamp="2025-10-14 15:06:05 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.11207586 +0000 UTC m=+1028.698859309" lastFinishedPulling="2025-10-14 15:06:38.288306711 +0000 UTC m=+1059.875090160" observedRunningTime="2025-10-14 15:06:39.231946167 +0000 UTC m=+1060.818729616" watchObservedRunningTime="2025-10-14 15:06:39.236434885 +0000 UTC m=+1060.823218334" Oct 14 15:06:39 crc kubenswrapper[4860]: I1014 15:06:39.238476 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-9m7mm" podStartSLOduration=4.024963721 podStartE2EDuration="34.238468534s" podCreationTimestamp="2025-10-14 15:06:05 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.527663708 +0000 UTC m=+1029.114447157" lastFinishedPulling="2025-10-14 15:06:37.741168531 +0000 UTC m=+1059.327951970" observedRunningTime="2025-10-14 15:06:38.239932781 +0000 UTC m=+1059.826716230" watchObservedRunningTime="2025-10-14 15:06:39.238468534 +0000 UTC m=+1060.825251983" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.175012 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-lwpwz" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.197541 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-dgxfp" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.363169 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8ht4q" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.383882 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-59578bc799-bc4x8" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.434534 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-lfntw" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.481382 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-f2rfg" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.482340 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-6lzwd" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.512014 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-8mngx" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.596167 4860 kubelet.go:2542] 
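[Annotation] The "SyncLoop (probe)" lines flip from status="" (the probe has not yet produced a result) to status="ready" as each manager container begins answering its readiness check. For illustration, this is roughly how such a probe is declared via the Python client models; the /readyz path and port 8081 are assumptions (common controller-runtime defaults), since the operators' actual probe specs do not appear in this log:

```python
# Sketch: an HTTP readiness probe of the kind the kubelet is evaluating above.
# Path and port are assumed defaults, not read from this log.
from kubernetes import client

readiness = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/readyz", port=8081),
    initial_delay_seconds=5,
    period_seconds=10,
    failure_threshold=3,
)
```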
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-mdd5z" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.626980 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-2rpj7" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.628860 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-tw4ph" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.667932 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-l9w8x" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.681721 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.692055 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-5l2qq" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.800761 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-4kql9" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.831808 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-664664cb68-l6rbl" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.902697 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-rpjh4" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.925510 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-9m7mm" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.930579 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-9m7mm" Oct 14 15:06:45 crc kubenswrapper[4860]: I1014 15:06:45.976059 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-646675d848-lzb7d" Oct 14 15:06:47 crc kubenswrapper[4860]: I1014 15:06:47.076886 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" Oct 14 15:06:49 crc kubenswrapper[4860]: I1014 15:06:49.068135 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 15:06:51 crc kubenswrapper[4860]: I1014 15:06:51.289149 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w" event={"ID":"572e90ee-e3d4-44a0-b3c5-d0005f4cb41c","Type":"ContainerStarted","Data":"efb7c3b89adcc31c7498257a5343ae68cf21bb8ad99cf4c4f69f60d4626320f2"} Oct 14 15:06:51 crc kubenswrapper[4860]: I1014 15:06:51.289660 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w" Oct 14 15:06:51 crc kubenswrapper[4860]: I1014 15:06:51.326579 4860 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w" podStartSLOduration=3.144297827 podStartE2EDuration="46.326558724s" podCreationTimestamp="2025-10-14 15:06:05 +0000 UTC" firstStartedPulling="2025-10-14 15:06:07.749211975 +0000 UTC m=+1029.335995424" lastFinishedPulling="2025-10-14 15:06:50.931472882 +0000 UTC m=+1072.518256321" observedRunningTime="2025-10-14 15:06:51.320331853 +0000 UTC m=+1072.907115302" watchObservedRunningTime="2025-10-14 15:06:51.326558724 +0000 UTC m=+1072.913342173" Oct 14 15:06:59 crc kubenswrapper[4860]: I1014 15:06:59.245358 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:06:59 crc kubenswrapper[4860]: I1014 15:06:59.245971 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:07:05 crc kubenswrapper[4860]: I1014 15:07:05.860624 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xpq8w" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.102510 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pzttx"] Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.104739 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pzttx" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.108888 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.108973 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.109589 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.109799 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-h28z2" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.123961 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pzttx"] Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.155865 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74fa4b97-e7b5-4502-ae81-a13af91e5252-config\") pod \"dnsmasq-dns-675f4bcbfc-pzttx\" (UID: \"74fa4b97-e7b5-4502-ae81-a13af91e5252\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pzttx" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.156060 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m74zj\" (UniqueName: \"kubernetes.io/projected/74fa4b97-e7b5-4502-ae81-a13af91e5252-kube-api-access-m74zj\") pod \"dnsmasq-dns-675f4bcbfc-pzttx\" (UID: \"74fa4b97-e7b5-4502-ae81-a13af91e5252\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pzttx" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.207049 4860 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dlcvp"] Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.208426 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-dlcvp" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.210506 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.257297 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dlcvp"] Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.257788 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m74zj\" (UniqueName: \"kubernetes.io/projected/74fa4b97-e7b5-4502-ae81-a13af91e5252-kube-api-access-m74zj\") pod \"dnsmasq-dns-675f4bcbfc-pzttx\" (UID: \"74fa4b97-e7b5-4502-ae81-a13af91e5252\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pzttx" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.258008 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74fa4b97-e7b5-4502-ae81-a13af91e5252-config\") pod \"dnsmasq-dns-675f4bcbfc-pzttx\" (UID: \"74fa4b97-e7b5-4502-ae81-a13af91e5252\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pzttx" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.258798 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74fa4b97-e7b5-4502-ae81-a13af91e5252-config\") pod \"dnsmasq-dns-675f4bcbfc-pzttx\" (UID: \"74fa4b97-e7b5-4502-ae81-a13af91e5252\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pzttx" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.293817 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m74zj\" (UniqueName: \"kubernetes.io/projected/74fa4b97-e7b5-4502-ae81-a13af91e5252-kube-api-access-m74zj\") pod \"dnsmasq-dns-675f4bcbfc-pzttx\" (UID: \"74fa4b97-e7b5-4502-ae81-a13af91e5252\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pzttx" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.359657 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e4669c-061b-4de0-9ec2-47a676cfe93c-config\") pod \"dnsmasq-dns-78dd6ddcc-dlcvp\" (UID: \"88e4669c-061b-4de0-9ec2-47a676cfe93c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dlcvp" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.359763 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88e4669c-061b-4de0-9ec2-47a676cfe93c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-dlcvp\" (UID: \"88e4669c-061b-4de0-9ec2-47a676cfe93c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dlcvp" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.359789 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xhrw\" (UniqueName: \"kubernetes.io/projected/88e4669c-061b-4de0-9ec2-47a676cfe93c-kube-api-access-5xhrw\") pod \"dnsmasq-dns-78dd6ddcc-dlcvp\" (UID: \"88e4669c-061b-4de0-9ec2-47a676cfe93c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dlcvp" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.420917 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pzttx" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.460735 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88e4669c-061b-4de0-9ec2-47a676cfe93c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-dlcvp\" (UID: \"88e4669c-061b-4de0-9ec2-47a676cfe93c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dlcvp" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.460792 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xhrw\" (UniqueName: \"kubernetes.io/projected/88e4669c-061b-4de0-9ec2-47a676cfe93c-kube-api-access-5xhrw\") pod \"dnsmasq-dns-78dd6ddcc-dlcvp\" (UID: \"88e4669c-061b-4de0-9ec2-47a676cfe93c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dlcvp" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.460864 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e4669c-061b-4de0-9ec2-47a676cfe93c-config\") pod \"dnsmasq-dns-78dd6ddcc-dlcvp\" (UID: \"88e4669c-061b-4de0-9ec2-47a676cfe93c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dlcvp" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.461634 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e4669c-061b-4de0-9ec2-47a676cfe93c-config\") pod \"dnsmasq-dns-78dd6ddcc-dlcvp\" (UID: \"88e4669c-061b-4de0-9ec2-47a676cfe93c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dlcvp" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.462185 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88e4669c-061b-4de0-9ec2-47a676cfe93c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-dlcvp\" (UID: \"88e4669c-061b-4de0-9ec2-47a676cfe93c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dlcvp" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.479655 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xhrw\" (UniqueName: \"kubernetes.io/projected/88e4669c-061b-4de0-9ec2-47a676cfe93c-kube-api-access-5xhrw\") pod \"dnsmasq-dns-78dd6ddcc-dlcvp\" (UID: \"88e4669c-061b-4de0-9ec2-47a676cfe93c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dlcvp" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.521513 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-dlcvp" Oct 14 15:07:25 crc kubenswrapper[4860]: I1014 15:07:25.942505 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pzttx"] Oct 14 15:07:26 crc kubenswrapper[4860]: I1014 15:07:26.053928 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dlcvp"] Oct 14 15:07:26 crc kubenswrapper[4860]: W1014 15:07:26.062257 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88e4669c_061b_4de0_9ec2_47a676cfe93c.slice/crio-c5f187e941701bede299b5dd27be3f031d4cb88cb6f1dca33352f1f113d2232a WatchSource:0}: Error finding container c5f187e941701bede299b5dd27be3f031d4cb88cb6f1dca33352f1f113d2232a: Status 404 returned error can't find the container with id c5f187e941701bede299b5dd27be3f031d4cb88cb6f1dca33352f1f113d2232a Oct 14 15:07:26 crc kubenswrapper[4860]: I1014 15:07:26.518656 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pzttx" event={"ID":"74fa4b97-e7b5-4502-ae81-a13af91e5252","Type":"ContainerStarted","Data":"12566d043a28b79064782ffc2d66694cf70d828be9ee8f7c33ba211e11f03239"} Oct 14 15:07:26 crc kubenswrapper[4860]: I1014 15:07:26.520707 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-dlcvp" event={"ID":"88e4669c-061b-4de0-9ec2-47a676cfe93c","Type":"ContainerStarted","Data":"c5f187e941701bede299b5dd27be3f031d4cb88cb6f1dca33352f1f113d2232a"} Oct 14 15:07:28 crc kubenswrapper[4860]: I1014 15:07:28.416840 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pzttx"] Oct 14 15:07:28 crc kubenswrapper[4860]: I1014 15:07:28.485846 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-znf2c"] Oct 14 15:07:28 crc kubenswrapper[4860]: I1014 15:07:28.486982 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-znf2c" Oct 14 15:07:28 crc kubenswrapper[4860]: I1014 15:07:28.517900 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lst8f\" (UniqueName: \"kubernetes.io/projected/022b3ae4-c617-4666-afc6-874b561926f4-kube-api-access-lst8f\") pod \"dnsmasq-dns-666b6646f7-znf2c\" (UID: \"022b3ae4-c617-4666-afc6-874b561926f4\") " pod="openstack/dnsmasq-dns-666b6646f7-znf2c" Oct 14 15:07:28 crc kubenswrapper[4860]: I1014 15:07:28.517972 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/022b3ae4-c617-4666-afc6-874b561926f4-config\") pod \"dnsmasq-dns-666b6646f7-znf2c\" (UID: \"022b3ae4-c617-4666-afc6-874b561926f4\") " pod="openstack/dnsmasq-dns-666b6646f7-znf2c" Oct 14 15:07:28 crc kubenswrapper[4860]: I1014 15:07:28.518023 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/022b3ae4-c617-4666-afc6-874b561926f4-dns-svc\") pod \"dnsmasq-dns-666b6646f7-znf2c\" (UID: \"022b3ae4-c617-4666-afc6-874b561926f4\") " pod="openstack/dnsmasq-dns-666b6646f7-znf2c" Oct 14 15:07:28 crc kubenswrapper[4860]: I1014 15:07:28.524794 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-znf2c"] Oct 14 15:07:28 crc kubenswrapper[4860]: I1014 15:07:28.625064 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/022b3ae4-c617-4666-afc6-874b561926f4-config\") pod \"dnsmasq-dns-666b6646f7-znf2c\" (UID: \"022b3ae4-c617-4666-afc6-874b561926f4\") " pod="openstack/dnsmasq-dns-666b6646f7-znf2c" Oct 14 15:07:28 crc kubenswrapper[4860]: I1014 15:07:28.625308 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/022b3ae4-c617-4666-afc6-874b561926f4-dns-svc\") pod \"dnsmasq-dns-666b6646f7-znf2c\" (UID: \"022b3ae4-c617-4666-afc6-874b561926f4\") " pod="openstack/dnsmasq-dns-666b6646f7-znf2c" Oct 14 15:07:28 crc kubenswrapper[4860]: I1014 15:07:28.625459 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lst8f\" (UniqueName: \"kubernetes.io/projected/022b3ae4-c617-4666-afc6-874b561926f4-kube-api-access-lst8f\") pod \"dnsmasq-dns-666b6646f7-znf2c\" (UID: \"022b3ae4-c617-4666-afc6-874b561926f4\") " pod="openstack/dnsmasq-dns-666b6646f7-znf2c" Oct 14 15:07:28 crc kubenswrapper[4860]: I1014 15:07:28.626314 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/022b3ae4-c617-4666-afc6-874b561926f4-config\") pod \"dnsmasq-dns-666b6646f7-znf2c\" (UID: \"022b3ae4-c617-4666-afc6-874b561926f4\") " pod="openstack/dnsmasq-dns-666b6646f7-znf2c" Oct 14 15:07:28 crc kubenswrapper[4860]: I1014 15:07:28.626941 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/022b3ae4-c617-4666-afc6-874b561926f4-dns-svc\") pod \"dnsmasq-dns-666b6646f7-znf2c\" (UID: \"022b3ae4-c617-4666-afc6-874b561926f4\") " pod="openstack/dnsmasq-dns-666b6646f7-znf2c" Oct 14 15:07:28 crc kubenswrapper[4860]: I1014 15:07:28.695265 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lst8f\" (UniqueName: 
\"kubernetes.io/projected/022b3ae4-c617-4666-afc6-874b561926f4-kube-api-access-lst8f\") pod \"dnsmasq-dns-666b6646f7-znf2c\" (UID: \"022b3ae4-c617-4666-afc6-874b561926f4\") " pod="openstack/dnsmasq-dns-666b6646f7-znf2c" Oct 14 15:07:28 crc kubenswrapper[4860]: I1014 15:07:28.810199 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-znf2c" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.036195 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dlcvp"] Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.203147 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-95xfl"] Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.204383 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.204848 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-95xfl"] Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.248235 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.248281 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.248318 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.249005 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c40d8caa5e52e82b9243eb5410bd9850080abe3ed1c63b68f1d1d3b4330efe8"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.249125 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://7c40d8caa5e52e82b9243eb5410bd9850080abe3ed1c63b68f1d1d3b4330efe8" gracePeriod=600 Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.292939 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpx94\" (UniqueName: \"kubernetes.io/projected/9138e3ca-610f-4970-984b-626c6aab739d-kube-api-access-hpx94\") pod \"dnsmasq-dns-57d769cc4f-95xfl\" (UID: \"9138e3ca-610f-4970-984b-626c6aab739d\") " pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.294917 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9138e3ca-610f-4970-984b-626c6aab739d-dns-svc\") pod 
\"dnsmasq-dns-57d769cc4f-95xfl\" (UID: \"9138e3ca-610f-4970-984b-626c6aab739d\") " pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.294979 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9138e3ca-610f-4970-984b-626c6aab739d-config\") pod \"dnsmasq-dns-57d769cc4f-95xfl\" (UID: \"9138e3ca-610f-4970-984b-626c6aab739d\") " pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.396100 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9138e3ca-610f-4970-984b-626c6aab739d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-95xfl\" (UID: \"9138e3ca-610f-4970-984b-626c6aab739d\") " pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.396167 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9138e3ca-610f-4970-984b-626c6aab739d-config\") pod \"dnsmasq-dns-57d769cc4f-95xfl\" (UID: \"9138e3ca-610f-4970-984b-626c6aab739d\") " pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.396265 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpx94\" (UniqueName: \"kubernetes.io/projected/9138e3ca-610f-4970-984b-626c6aab739d-kube-api-access-hpx94\") pod \"dnsmasq-dns-57d769cc4f-95xfl\" (UID: \"9138e3ca-610f-4970-984b-626c6aab739d\") " pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.397732 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9138e3ca-610f-4970-984b-626c6aab739d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-95xfl\" (UID: \"9138e3ca-610f-4970-984b-626c6aab739d\") " pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.398530 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9138e3ca-610f-4970-984b-626c6aab739d-config\") pod \"dnsmasq-dns-57d769cc4f-95xfl\" (UID: \"9138e3ca-610f-4970-984b-626c6aab739d\") " pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.422304 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpx94\" (UniqueName: \"kubernetes.io/projected/9138e3ca-610f-4970-984b-626c6aab739d-kube-api-access-hpx94\") pod \"dnsmasq-dns-57d769cc4f-95xfl\" (UID: \"9138e3ca-610f-4970-984b-626c6aab739d\") " pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.530015 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.574104 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="7c40d8caa5e52e82b9243eb5410bd9850080abe3ed1c63b68f1d1d3b4330efe8" exitCode=0 Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.574167 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"7c40d8caa5e52e82b9243eb5410bd9850080abe3ed1c63b68f1d1d3b4330efe8"} Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.574224 4860 scope.go:117] "RemoveContainer" containerID="35f60ae25f79186f53f554e65dfb897f3e59fbee448cf25d36669e90dcf31a8b" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.575910 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-znf2c"] Oct 14 15:07:29 crc kubenswrapper[4860]: W1014 15:07:29.603862 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod022b3ae4_c617_4666_afc6_874b561926f4.slice/crio-10c0befd40d7296496e78a85c848a7ab8800f731c2d0573e15c03ef91bdcd462 WatchSource:0}: Error finding container 10c0befd40d7296496e78a85c848a7ab8800f731c2d0573e15c03ef91bdcd462: Status 404 returned error can't find the container with id 10c0befd40d7296496e78a85c848a7ab8800f731c2d0573e15c03ef91bdcd462 Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.747101 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.749365 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.761821 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.762329 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.762625 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-w8xwd" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.762891 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.763240 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.763357 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.763545 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.764237 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.907150 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfc4p\" (UniqueName: \"kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-kube-api-access-vfc4p\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.907197 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.907219 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90824b73-8623-495c-8bed-fdc67bff987a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.907252 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.907381 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.907452 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/90824b73-8623-495c-8bed-fdc67bff987a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.907514 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.907613 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.907724 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.907796 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:29 crc kubenswrapper[4860]: I1014 15:07:29.907836 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-config-data\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.009648 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.009700 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90824b73-8623-495c-8bed-fdc67bff987a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.009750 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.009790 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " 
pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.009822 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90824b73-8623-495c-8bed-fdc67bff987a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.009850 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.009899 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.009932 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.009967 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.009989 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-config-data\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.010012 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfc4p\" (UniqueName: \"kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-kube-api-access-vfc4p\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.010232 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.010882 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.011906 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.012343 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-config-data\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.012592 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.012667 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.015526 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90824b73-8623-495c-8bed-fdc67bff987a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.015648 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90824b73-8623-495c-8bed-fdc67bff987a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.020918 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.023964 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.026401 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfc4p\" (UniqueName: \"kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-kube-api-access-vfc4p\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.064135 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.110311 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.157330 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-95xfl"] Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.201689 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.208529 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.215044 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.215293 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bm47p" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.215418 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.217601 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.217742 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.217859 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.218001 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.228808 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 15:07:30 crc kubenswrapper[4860]: W1014 15:07:30.237974 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9138e3ca_610f_4970_984b_626c6aab739d.slice/crio-319974c90b4e0c8471c05a29e5710a6db7d9cb9c0cb4edb3c413b788f9a15911 WatchSource:0}: Error finding container 319974c90b4e0c8471c05a29e5710a6db7d9cb9c0cb4edb3c413b788f9a15911: Status 404 returned error can't find the container with id 319974c90b4e0c8471c05a29e5710a6db7d9cb9c0cb4edb3c413b788f9a15911 Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.342109 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.342173 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.343752 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.343784 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.343857 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.343876 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.343931 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d1afb1fa-9423-4ef6-a771-76c666ca1038-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.344057 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.344122 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.344138 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d1afb1fa-9423-4ef6-a771-76c666ca1038-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.344154 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kjfj\" (UniqueName: \"kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-kube-api-access-6kjfj\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.457739 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.457800 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.457825 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d1afb1fa-9423-4ef6-a771-76c666ca1038-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.457843 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kjfj\" (UniqueName: \"kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-kube-api-access-6kjfj\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.457888 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.457911 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.457931 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.457955 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.457985 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.458004 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.458019 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d1afb1fa-9423-4ef6-a771-76c666ca1038-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.458189 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.458550 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.467502 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.469611 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d1afb1fa-9423-4ef6-a771-76c666ca1038-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.475374 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.479619 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.480810 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d1afb1fa-9423-4ef6-a771-76c666ca1038-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.481721 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kjfj\" (UniqueName: \"kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-kube-api-access-6kjfj\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.486868 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.487890 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.491567 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.511778 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.575052 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.594874 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"8e8c816816ac6aa5296d7e14d541eea35fcda7f2a88ab8bc1a07386f6df3b2dd"} Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.596620 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-znf2c" event={"ID":"022b3ae4-c617-4666-afc6-874b561926f4","Type":"ContainerStarted","Data":"10c0befd40d7296496e78a85c848a7ab8800f731c2d0573e15c03ef91bdcd462"} Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.598078 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" event={"ID":"9138e3ca-610f-4970-984b-626c6aab739d","Type":"ContainerStarted","Data":"319974c90b4e0c8471c05a29e5710a6db7d9cb9c0cb4edb3c413b788f9a15911"} Oct 14 15:07:30 crc kubenswrapper[4860]: I1014 15:07:30.755982 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 15:07:30 crc kubenswrapper[4860]: W1014 15:07:30.786607 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90824b73_8623_495c_8bed_fdc67bff987a.slice/crio-006a89cadc8bd4f2664d15fae504fcb3d11d8daad22fcb0022c4b639ed6c6199 WatchSource:0}: Error finding container 006a89cadc8bd4f2664d15fae504fcb3d11d8daad22fcb0022c4b639ed6c6199: Status 404 returned error can't find the container with id 006a89cadc8bd4f2664d15fae504fcb3d11d8daad22fcb0022c4b639ed6c6199 Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.240829 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.243003 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.245796 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.246579 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.251407 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.251630 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.251690 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-s5vss" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.252760 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.276210 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.321674 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.395623 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42j65\" (UniqueName: \"kubernetes.io/projected/0619b1f4-ea36-41ab-a97b-2a97d516e53c-kube-api-access-42j65\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.395945 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/0619b1f4-ea36-41ab-a97b-2a97d516e53c-secrets\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.395976 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0619b1f4-ea36-41ab-a97b-2a97d516e53c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.396043 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.396060 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0619b1f4-ea36-41ab-a97b-2a97d516e53c-config-data-default\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.396080 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0619b1f4-ea36-41ab-a97b-2a97d516e53c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.396185 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0619b1f4-ea36-41ab-a97b-2a97d516e53c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.396242 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0619b1f4-ea36-41ab-a97b-2a97d516e53c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.396313 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0619b1f4-ea36-41ab-a97b-2a97d516e53c-kolla-config\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.497822 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.497858 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0619b1f4-ea36-41ab-a97b-2a97d516e53c-config-data-default\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.497883 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0619b1f4-ea36-41ab-a97b-2a97d516e53c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.497902 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0619b1f4-ea36-41ab-a97b-2a97d516e53c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.497925 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0619b1f4-ea36-41ab-a97b-2a97d516e53c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.497960 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0619b1f4-ea36-41ab-a97b-2a97d516e53c-kolla-config\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " 
pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.497993 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42j65\" (UniqueName: \"kubernetes.io/projected/0619b1f4-ea36-41ab-a97b-2a97d516e53c-kube-api-access-42j65\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.498037 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/0619b1f4-ea36-41ab-a97b-2a97d516e53c-secrets\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.498058 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0619b1f4-ea36-41ab-a97b-2a97d516e53c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.498449 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.499666 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0619b1f4-ea36-41ab-a97b-2a97d516e53c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.499986 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0619b1f4-ea36-41ab-a97b-2a97d516e53c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.500650 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0619b1f4-ea36-41ab-a97b-2a97d516e53c-config-data-default\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.501115 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0619b1f4-ea36-41ab-a97b-2a97d516e53c-kolla-config\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.507291 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/0619b1f4-ea36-41ab-a97b-2a97d516e53c-secrets\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.507790 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0619b1f4-ea36-41ab-a97b-2a97d516e53c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.519821 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0619b1f4-ea36-41ab-a97b-2a97d516e53c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.520018 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42j65\" (UniqueName: \"kubernetes.io/projected/0619b1f4-ea36-41ab-a97b-2a97d516e53c-kube-api-access-42j65\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.540656 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"0619b1f4-ea36-41ab-a97b-2a97d516e53c\") " pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.563533 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.620988 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d1afb1fa-9423-4ef6-a771-76c666ca1038","Type":"ContainerStarted","Data":"abe59e8723c49469ae04348fd61f230f0b1f62c256c599cf0c4d5af2e422a417"} Oct 14 15:07:31 crc kubenswrapper[4860]: I1014 15:07:31.649777 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"90824b73-8623-495c-8bed-fdc67bff987a","Type":"ContainerStarted","Data":"006a89cadc8bd4f2664d15fae504fcb3d11d8daad22fcb0022c4b639ed6c6199"} Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.293708 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 14 15:07:32 crc kubenswrapper[4860]: W1014 15:07:32.337253 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0619b1f4_ea36_41ab_a97b_2a97d516e53c.slice/crio-ab732f244cd489bc3ab3a9811ead2a53f49ee05c8f2c201fd248c2e8d15de639 WatchSource:0}: Error finding container ab732f244cd489bc3ab3a9811ead2a53f49ee05c8f2c201fd248c2e8d15de639: Status 404 returned error can't find the container with id ab732f244cd489bc3ab3a9811ead2a53f49ee05c8f2c201fd248c2e8d15de639 Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.664765 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.667922 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.670172 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.670246 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.670870 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.670948 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-mkfwh" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.672151 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.685624 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0619b1f4-ea36-41ab-a97b-2a97d516e53c","Type":"ContainerStarted","Data":"ab732f244cd489bc3ab3a9811ead2a53f49ee05c8f2c201fd248c2e8d15de639"} Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.825520 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84bf98f8-38a7-469a-a6ce-f3b573aa1356-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.825700 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/84bf98f8-38a7-469a-a6ce-f3b573aa1356-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.825789 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/84bf98f8-38a7-469a-a6ce-f3b573aa1356-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.825931 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkxmt\" (UniqueName: \"kubernetes.io/projected/84bf98f8-38a7-469a-a6ce-f3b573aa1356-kube-api-access-vkxmt\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.825988 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/84bf98f8-38a7-469a-a6ce-f3b573aa1356-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.826067 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84bf98f8-38a7-469a-a6ce-f3b573aa1356-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.826123 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/84bf98f8-38a7-469a-a6ce-f3b573aa1356-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.826229 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/84bf98f8-38a7-469a-a6ce-f3b573aa1356-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.826291 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.927731 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkxmt\" (UniqueName: \"kubernetes.io/projected/84bf98f8-38a7-469a-a6ce-f3b573aa1356-kube-api-access-vkxmt\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.927800 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/84bf98f8-38a7-469a-a6ce-f3b573aa1356-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.927822 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84bf98f8-38a7-469a-a6ce-f3b573aa1356-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.930215 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/84bf98f8-38a7-469a-a6ce-f3b573aa1356-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.930311 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/84bf98f8-38a7-469a-a6ce-f3b573aa1356-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.930374 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.930484 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84bf98f8-38a7-469a-a6ce-f3b573aa1356-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.931195 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/84bf98f8-38a7-469a-a6ce-f3b573aa1356-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.931262 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/84bf98f8-38a7-469a-a6ce-f3b573aa1356-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.931443 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/84bf98f8-38a7-469a-a6ce-f3b573aa1356-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.931696 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/84bf98f8-38a7-469a-a6ce-f3b573aa1356-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.934205 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.934283 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/84bf98f8-38a7-469a-a6ce-f3b573aa1356-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.940597 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84bf98f8-38a7-469a-a6ce-f3b573aa1356-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.967347 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/84bf98f8-38a7-469a-a6ce-f3b573aa1356-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.970457 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84bf98f8-38a7-469a-a6ce-f3b573aa1356-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:32 crc kubenswrapper[4860]: I1014 15:07:32.993376 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/84bf98f8-38a7-469a-a6ce-f3b573aa1356-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.001251 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkxmt\" (UniqueName: \"kubernetes.io/projected/84bf98f8-38a7-469a-a6ce-f3b573aa1356-kube-api-access-vkxmt\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.038316 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"84bf98f8-38a7-469a-a6ce-f3b573aa1356\") " pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.040135 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.043162 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.045793 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-xjfkb" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.046098 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.046254 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.138327 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/03a44669-ea47-471b-a369-93f6f85bec6b-kolla-config\") pod \"memcached-0\" (UID: \"03a44669-ea47-471b-a369-93f6f85bec6b\") " pod="openstack/memcached-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.138426 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a44669-ea47-471b-a369-93f6f85bec6b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"03a44669-ea47-471b-a369-93f6f85bec6b\") " pod="openstack/memcached-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.138473 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p74f\" (UniqueName: \"kubernetes.io/projected/03a44669-ea47-471b-a369-93f6f85bec6b-kube-api-access-7p74f\") pod \"memcached-0\" (UID: \"03a44669-ea47-471b-a369-93f6f85bec6b\") " pod="openstack/memcached-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.138539 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/03a44669-ea47-471b-a369-93f6f85bec6b-config-data\") pod \"memcached-0\" (UID: \"03a44669-ea47-471b-a369-93f6f85bec6b\") " pod="openstack/memcached-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.138569 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/03a44669-ea47-471b-a369-93f6f85bec6b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"03a44669-ea47-471b-a369-93f6f85bec6b\") " pod="openstack/memcached-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.152221 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.248298 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a44669-ea47-471b-a369-93f6f85bec6b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"03a44669-ea47-471b-a369-93f6f85bec6b\") " pod="openstack/memcached-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.248384 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p74f\" (UniqueName: \"kubernetes.io/projected/03a44669-ea47-471b-a369-93f6f85bec6b-kube-api-access-7p74f\") pod \"memcached-0\" (UID: \"03a44669-ea47-471b-a369-93f6f85bec6b\") " pod="openstack/memcached-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.249154 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03a44669-ea47-471b-a369-93f6f85bec6b-config-data\") pod \"memcached-0\" (UID: \"03a44669-ea47-471b-a369-93f6f85bec6b\") " pod="openstack/memcached-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.249232 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/03a44669-ea47-471b-a369-93f6f85bec6b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"03a44669-ea47-471b-a369-93f6f85bec6b\") " pod="openstack/memcached-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.249296 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/03a44669-ea47-471b-a369-93f6f85bec6b-kolla-config\") pod \"memcached-0\" (UID: \"03a44669-ea47-471b-a369-93f6f85bec6b\") " pod="openstack/memcached-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.250067 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/03a44669-ea47-471b-a369-93f6f85bec6b-kolla-config\") pod \"memcached-0\" (UID: \"03a44669-ea47-471b-a369-93f6f85bec6b\") " pod="openstack/memcached-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.251620 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03a44669-ea47-471b-a369-93f6f85bec6b-config-data\") pod \"memcached-0\" (UID: \"03a44669-ea47-471b-a369-93f6f85bec6b\") " pod="openstack/memcached-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.294341 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/03a44669-ea47-471b-a369-93f6f85bec6b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"03a44669-ea47-471b-a369-93f6f85bec6b\") " 
pod="openstack/memcached-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.296115 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a44669-ea47-471b-a369-93f6f85bec6b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"03a44669-ea47-471b-a369-93f6f85bec6b\") " pod="openstack/memcached-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.296669 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.297557 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p74f\" (UniqueName: \"kubernetes.io/projected/03a44669-ea47-471b-a369-93f6f85bec6b-kube-api-access-7p74f\") pod \"memcached-0\" (UID: \"03a44669-ea47-471b-a369-93f6f85bec6b\") " pod="openstack/memcached-0" Oct 14 15:07:33 crc kubenswrapper[4860]: I1014 15:07:33.420942 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 14 15:07:34 crc kubenswrapper[4860]: I1014 15:07:34.584344 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 15:07:34 crc kubenswrapper[4860]: I1014 15:07:34.585411 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 15:07:34 crc kubenswrapper[4860]: I1014 15:07:34.591246 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-mbllm" Oct 14 15:07:34 crc kubenswrapper[4860]: I1014 15:07:34.596485 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 15:07:34 crc kubenswrapper[4860]: I1014 15:07:34.687164 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn4f4\" (UniqueName: \"kubernetes.io/projected/22d1e6d4-a98e-457e-9e99-8e2f4319031b-kube-api-access-hn4f4\") pod \"kube-state-metrics-0\" (UID: \"22d1e6d4-a98e-457e-9e99-8e2f4319031b\") " pod="openstack/kube-state-metrics-0" Oct 14 15:07:34 crc kubenswrapper[4860]: I1014 15:07:34.791798 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn4f4\" (UniqueName: \"kubernetes.io/projected/22d1e6d4-a98e-457e-9e99-8e2f4319031b-kube-api-access-hn4f4\") pod \"kube-state-metrics-0\" (UID: \"22d1e6d4-a98e-457e-9e99-8e2f4319031b\") " pod="openstack/kube-state-metrics-0" Oct 14 15:07:34 crc kubenswrapper[4860]: I1014 15:07:34.812161 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn4f4\" (UniqueName: \"kubernetes.io/projected/22d1e6d4-a98e-457e-9e99-8e2f4319031b-kube-api-access-hn4f4\") pod \"kube-state-metrics-0\" (UID: \"22d1e6d4-a98e-457e-9e99-8e2f4319031b\") " pod="openstack/kube-state-metrics-0" Oct 14 15:07:34 crc kubenswrapper[4860]: I1014 15:07:34.924139 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.391058 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sc6wm"] Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.392547 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.397584 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.397687 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.397929 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-clm24" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.413396 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sc6wm"] Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.473410 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fbd86ca-1d38-4b27-bd36-62198c367b3d-scripts\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.473471 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qctr\" (UniqueName: \"kubernetes.io/projected/8fbd86ca-1d38-4b27-bd36-62198c367b3d-kube-api-access-7qctr\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.476117 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8fbd86ca-1d38-4b27-bd36-62198c367b3d-var-run\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.476261 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8fbd86ca-1d38-4b27-bd36-62198c367b3d-var-run-ovn\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.476352 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbd86ca-1d38-4b27-bd36-62198c367b3d-combined-ca-bundle\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.476414 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fbd86ca-1d38-4b27-bd36-62198c367b3d-ovn-controller-tls-certs\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.476438 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8fbd86ca-1d38-4b27-bd36-62198c367b3d-var-log-ovn\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.482641 4860 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-vbhtr"] Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.485747 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.489077 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-vbhtr"] Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.577654 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8fbd86ca-1d38-4b27-bd36-62198c367b3d-var-log-ovn\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.577730 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/517eb23f-ec49-4288-a019-df9ac4da8ccd-var-run\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.577787 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fbd86ca-1d38-4b27-bd36-62198c367b3d-scripts\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.577816 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qctr\" (UniqueName: \"kubernetes.io/projected/8fbd86ca-1d38-4b27-bd36-62198c367b3d-kube-api-access-7qctr\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.577844 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/517eb23f-ec49-4288-a019-df9ac4da8ccd-var-lib\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.577880 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8fbd86ca-1d38-4b27-bd36-62198c367b3d-var-run\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.577908 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trqd2\" (UniqueName: \"kubernetes.io/projected/517eb23f-ec49-4288-a019-df9ac4da8ccd-kube-api-access-trqd2\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.577955 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/517eb23f-ec49-4288-a019-df9ac4da8ccd-var-log\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.577990 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8fbd86ca-1d38-4b27-bd36-62198c367b3d-var-run-ovn\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.578013 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/517eb23f-ec49-4288-a019-df9ac4da8ccd-etc-ovs\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.578061 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/517eb23f-ec49-4288-a019-df9ac4da8ccd-scripts\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.578098 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbd86ca-1d38-4b27-bd36-62198c367b3d-combined-ca-bundle\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.578140 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fbd86ca-1d38-4b27-bd36-62198c367b3d-ovn-controller-tls-certs\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.578630 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8fbd86ca-1d38-4b27-bd36-62198c367b3d-var-run\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.578865 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8fbd86ca-1d38-4b27-bd36-62198c367b3d-var-log-ovn\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.579500 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8fbd86ca-1d38-4b27-bd36-62198c367b3d-var-run-ovn\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.580557 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fbd86ca-1d38-4b27-bd36-62198c367b3d-scripts\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.583767 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fbd86ca-1d38-4b27-bd36-62198c367b3d-ovn-controller-tls-certs\") pod \"ovn-controller-sc6wm\" 
(UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.590401 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbd86ca-1d38-4b27-bd36-62198c367b3d-combined-ca-bundle\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.598505 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qctr\" (UniqueName: \"kubernetes.io/projected/8fbd86ca-1d38-4b27-bd36-62198c367b3d-kube-api-access-7qctr\") pod \"ovn-controller-sc6wm\" (UID: \"8fbd86ca-1d38-4b27-bd36-62198c367b3d\") " pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.678716 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/517eb23f-ec49-4288-a019-df9ac4da8ccd-var-lib\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.678773 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trqd2\" (UniqueName: \"kubernetes.io/projected/517eb23f-ec49-4288-a019-df9ac4da8ccd-kube-api-access-trqd2\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.678807 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/517eb23f-ec49-4288-a019-df9ac4da8ccd-var-log\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.678830 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/517eb23f-ec49-4288-a019-df9ac4da8ccd-etc-ovs\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.678845 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/517eb23f-ec49-4288-a019-df9ac4da8ccd-scripts\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.678886 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/517eb23f-ec49-4288-a019-df9ac4da8ccd-var-run\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.678994 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/517eb23f-ec49-4288-a019-df9ac4da8ccd-var-run\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.679167 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/517eb23f-ec49-4288-a019-df9ac4da8ccd-var-lib\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.679472 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/517eb23f-ec49-4288-a019-df9ac4da8ccd-var-log\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.679591 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/517eb23f-ec49-4288-a019-df9ac4da8ccd-etc-ovs\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.681388 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/517eb23f-ec49-4288-a019-df9ac4da8ccd-scripts\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.699872 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trqd2\" (UniqueName: \"kubernetes.io/projected/517eb23f-ec49-4288-a019-df9ac4da8ccd-kube-api-access-trqd2\") pod \"ovn-controller-ovs-vbhtr\" (UID: \"517eb23f-ec49-4288-a019-df9ac4da8ccd\") " pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.709174 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sc6wm" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.805899 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.807132 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.814020 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.814227 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.814334 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-9f568" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.814551 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.814743 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.815652 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.820842 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.983981 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.984063 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.984093 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.984112 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-config\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.984153 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.984186 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.984219 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:38 crc kubenswrapper[4860]: I1014 15:07:38.984251 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b44dl\" (UniqueName: \"kubernetes.io/projected/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-kube-api-access-b44dl\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.085772 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-scripts\") pod \"ovsdbserver-nb-0\" 
(UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.085835 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b44dl\" (UniqueName: \"kubernetes.io/projected/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-kube-api-access-b44dl\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.085881 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.085914 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.085943 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.085968 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-config\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.086016 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.092726 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.086825 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.087230 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.087326 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-config\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.086525 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.099689 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.102783 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b44dl\" (UniqueName: \"kubernetes.io/projected/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-kube-api-access-b44dl\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.103132 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.106851 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ea3e827-d3d5-481d-b8f6-90b20be97f2e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.118589 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9ea3e827-d3d5-481d-b8f6-90b20be97f2e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:39 crc kubenswrapper[4860]: I1014 15:07:39.140482 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.063132 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.065303 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.069148 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.069282 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-97dt6" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.069564 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.069565 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.073885 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.157643 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.157982 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.158453 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.158795 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.158977 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.159115 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-config\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.159230 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " 
pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.159384 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbgxw\" (UniqueName: \"kubernetes.io/projected/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-kube-api-access-cbgxw\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.260676 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.260735 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.260758 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.260780 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.260803 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-config\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.260830 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.260858 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbgxw\" (UniqueName: \"kubernetes.io/projected/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-kube-api-access-cbgxw\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.260903 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.261525 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.261593 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.262155 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-config\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.262480 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.267513 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.269368 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.273043 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.278239 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbgxw\" (UniqueName: \"kubernetes.io/projected/ac3dbbff-ef4c-461d-b2a0-58284b598cb4-kube-api-access-cbgxw\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.282210 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ac3dbbff-ef4c-461d-b2a0-58284b598cb4\") " pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:42 crc kubenswrapper[4860]: I1014 15:07:42.392771 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 14 15:07:54 crc kubenswrapper[4860]: I1014 15:07:54.872398 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 15:07:55 crc kubenswrapper[4860]: W1014 15:07:55.519134 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84bf98f8_38a7_469a_a6ce_f3b573aa1356.slice/crio-01dab348075cb5365f1ac8eb0df69f0e013ba087aa01791928fba2e39b751a03 WatchSource:0}: Error finding container 01dab348075cb5365f1ac8eb0df69f0e013ba087aa01791928fba2e39b751a03: Status 404 returned error can't find the container with id 01dab348075cb5365f1ac8eb0df69f0e013ba087aa01791928fba2e39b751a03 Oct 14 15:07:55 crc kubenswrapper[4860]: E1014 15:07:55.559462 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 14 15:07:55 crc kubenswrapper[4860]: E1014 15:07:55.559604 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m74zj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-pzttx_openstack(74fa4b97-e7b5-4502-ae81-a13af91e5252): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:07:55 crc kubenswrapper[4860]: E1014 15:07:55.560748 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-pzttx" podUID="74fa4b97-e7b5-4502-ae81-a13af91e5252" Oct 14 15:07:55 crc kubenswrapper[4860]: E1014 15:07:55.588919 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 14 15:07:55 crc kubenswrapper[4860]: E1014 15:07:55.589071 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hpx94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-95xfl_openstack(9138e3ca-610f-4970-984b-626c6aab739d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:07:55 crc kubenswrapper[4860]: E1014 15:07:55.590186 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" podUID="9138e3ca-610f-4970-984b-626c6aab739d" Oct 14 15:07:55 crc kubenswrapper[4860]: E1014 15:07:55.639554 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 14 15:07:55 crc kubenswrapper[4860]: E1014 15:07:55.640131 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xhrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-dlcvp_openstack(88e4669c-061b-4de0-9ec2-47a676cfe93c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:07:55 crc kubenswrapper[4860]: E1014 15:07:55.641296 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-dlcvp" podUID="88e4669c-061b-4de0-9ec2-47a676cfe93c" Oct 14 15:07:55 crc kubenswrapper[4860]: E1014 15:07:55.756437 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 14 15:07:55 crc kubenswrapper[4860]: E1014 15:07:55.756730 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lst8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-znf2c_openstack(022b3ae4-c617-4666-afc6-874b561926f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:07:55 crc kubenswrapper[4860]: E1014 15:07:55.758338 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-znf2c" podUID="022b3ae4-c617-4666-afc6-874b561926f4" Oct 14 15:07:55 crc kubenswrapper[4860]: I1014 15:07:55.934968 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"84bf98f8-38a7-469a-a6ce-f3b573aa1356","Type":"ContainerStarted","Data":"01dab348075cb5365f1ac8eb0df69f0e013ba087aa01791928fba2e39b751a03"} Oct 14 15:07:55 crc kubenswrapper[4860]: E1014 15:07:55.937501 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" podUID="9138e3ca-610f-4970-984b-626c6aab739d" Oct 14 15:07:55 crc kubenswrapper[4860]: E1014 15:07:55.938617 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-znf2c" podUID="022b3ae4-c617-4666-afc6-874b561926f4" Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.202423 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.227058 4860 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sc6wm"] Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.393605 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pzttx" Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.477775 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-dlcvp" Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.571056 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 15:07:56 crc kubenswrapper[4860]: W1014 15:07:56.573148 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22d1e6d4_a98e_457e_9e99_8e2f4319031b.slice/crio-f191838b78886f436c9c36b8e9d5a93bc4b6b27959fcc80b2d3bff665262471f WatchSource:0}: Error finding container f191838b78886f436c9c36b8e9d5a93bc4b6b27959fcc80b2d3bff665262471f: Status 404 returned error can't find the container with id f191838b78886f436c9c36b8e9d5a93bc4b6b27959fcc80b2d3bff665262471f Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.582873 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xhrw\" (UniqueName: \"kubernetes.io/projected/88e4669c-061b-4de0-9ec2-47a676cfe93c-kube-api-access-5xhrw\") pod \"88e4669c-061b-4de0-9ec2-47a676cfe93c\" (UID: \"88e4669c-061b-4de0-9ec2-47a676cfe93c\") " Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.583012 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m74zj\" (UniqueName: \"kubernetes.io/projected/74fa4b97-e7b5-4502-ae81-a13af91e5252-kube-api-access-m74zj\") pod \"74fa4b97-e7b5-4502-ae81-a13af91e5252\" (UID: \"74fa4b97-e7b5-4502-ae81-a13af91e5252\") " Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.583052 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e4669c-061b-4de0-9ec2-47a676cfe93c-config\") pod \"88e4669c-061b-4de0-9ec2-47a676cfe93c\" (UID: \"88e4669c-061b-4de0-9ec2-47a676cfe93c\") " Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.583148 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74fa4b97-e7b5-4502-ae81-a13af91e5252-config\") pod \"74fa4b97-e7b5-4502-ae81-a13af91e5252\" (UID: \"74fa4b97-e7b5-4502-ae81-a13af91e5252\") " Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.583194 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88e4669c-061b-4de0-9ec2-47a676cfe93c-dns-svc\") pod \"88e4669c-061b-4de0-9ec2-47a676cfe93c\" (UID: \"88e4669c-061b-4de0-9ec2-47a676cfe93c\") " Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.584014 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88e4669c-061b-4de0-9ec2-47a676cfe93c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "88e4669c-061b-4de0-9ec2-47a676cfe93c" (UID: "88e4669c-061b-4de0-9ec2-47a676cfe93c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.584005 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88e4669c-061b-4de0-9ec2-47a676cfe93c-config" (OuterVolumeSpecName: "config") pod "88e4669c-061b-4de0-9ec2-47a676cfe93c" (UID: "88e4669c-061b-4de0-9ec2-47a676cfe93c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.585472 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74fa4b97-e7b5-4502-ae81-a13af91e5252-config" (OuterVolumeSpecName: "config") pod "74fa4b97-e7b5-4502-ae81-a13af91e5252" (UID: "74fa4b97-e7b5-4502-ae81-a13af91e5252"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.592313 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74fa4b97-e7b5-4502-ae81-a13af91e5252-kube-api-access-m74zj" (OuterVolumeSpecName: "kube-api-access-m74zj") pod "74fa4b97-e7b5-4502-ae81-a13af91e5252" (UID: "74fa4b97-e7b5-4502-ae81-a13af91e5252"). InnerVolumeSpecName "kube-api-access-m74zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.592668 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88e4669c-061b-4de0-9ec2-47a676cfe93c-kube-api-access-5xhrw" (OuterVolumeSpecName: "kube-api-access-5xhrw") pod "88e4669c-061b-4de0-9ec2-47a676cfe93c" (UID: "88e4669c-061b-4de0-9ec2-47a676cfe93c"). InnerVolumeSpecName "kube-api-access-5xhrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.684913 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74fa4b97-e7b5-4502-ae81-a13af91e5252-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.684959 4860 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88e4669c-061b-4de0-9ec2-47a676cfe93c-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.684973 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xhrw\" (UniqueName: \"kubernetes.io/projected/88e4669c-061b-4de0-9ec2-47a676cfe93c-kube-api-access-5xhrw\") on node \"crc\" DevicePath \"\"" Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.684988 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m74zj\" (UniqueName: \"kubernetes.io/projected/74fa4b97-e7b5-4502-ae81-a13af91e5252-kube-api-access-m74zj\") on node \"crc\" DevicePath \"\"" Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.685001 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e4669c-061b-4de0-9ec2-47a676cfe93c-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.719896 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.941549 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"9ea3e827-d3d5-481d-b8f6-90b20be97f2e","Type":"ContainerStarted","Data":"25eccb48cf5576cd78bfb054a61b215308541710a3044c77e6fd57e77c1945bb"} Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.944589 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"22d1e6d4-a98e-457e-9e99-8e2f4319031b","Type":"ContainerStarted","Data":"f191838b78886f436c9c36b8e9d5a93bc4b6b27959fcc80b2d3bff665262471f"} Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.956113 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0619b1f4-ea36-41ab-a97b-2a97d516e53c","Type":"ContainerStarted","Data":"b0c4674217ed197116ed6a4747489ae0525d70bfb6099af3ff0860eec98d67ee"} Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.958535 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sc6wm" event={"ID":"8fbd86ca-1d38-4b27-bd36-62198c367b3d","Type":"ContainerStarted","Data":"b8c39cf9accbffa97032bdcb989cb9c23b07778b6ab850f6ad2ef9a5a6c8943a"} Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.961175 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"84bf98f8-38a7-469a-a6ce-f3b573aa1356","Type":"ContainerStarted","Data":"3adde6a89492d8e1cc51667e3b6df018b768336039043eae4f67d1763302f172"} Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.964318 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"03a44669-ea47-471b-a369-93f6f85bec6b","Type":"ContainerStarted","Data":"4b410a62f54e7c4d737b2185c6344f72f9c6c055daff4034e9929003d3c22e2a"} Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.965669 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-dlcvp" event={"ID":"88e4669c-061b-4de0-9ec2-47a676cfe93c","Type":"ContainerDied","Data":"c5f187e941701bede299b5dd27be3f031d4cb88cb6f1dca33352f1f113d2232a"} Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.965733 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-dlcvp" Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.983506 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pzttx" event={"ID":"74fa4b97-e7b5-4502-ae81-a13af91e5252","Type":"ContainerDied","Data":"12566d043a28b79064782ffc2d66694cf70d828be9ee8f7c33ba211e11f03239"} Oct 14 15:07:56 crc kubenswrapper[4860]: I1014 15:07:56.983592 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pzttx" Oct 14 15:07:57 crc kubenswrapper[4860]: I1014 15:07:57.090852 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pzttx"] Oct 14 15:07:57 crc kubenswrapper[4860]: I1014 15:07:57.106741 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pzttx"] Oct 14 15:07:57 crc kubenswrapper[4860]: I1014 15:07:57.120430 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dlcvp"] Oct 14 15:07:57 crc kubenswrapper[4860]: I1014 15:07:57.125518 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dlcvp"] Oct 14 15:07:57 crc kubenswrapper[4860]: I1014 15:07:57.453587 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-vbhtr"] Oct 14 15:07:57 crc kubenswrapper[4860]: W1014 15:07:57.467887 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod517eb23f_ec49_4288_a019_df9ac4da8ccd.slice/crio-0224e9df020d5b4d87e2aef861f85bffc6e077352fbc5bee26131927aebea222 WatchSource:0}: Error finding container 0224e9df020d5b4d87e2aef861f85bffc6e077352fbc5bee26131927aebea222: Status 404 returned error can't find the container with id 0224e9df020d5b4d87e2aef861f85bffc6e077352fbc5bee26131927aebea222 Oct 14 15:07:57 crc kubenswrapper[4860]: I1014 15:07:57.533566 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 15:07:57 crc kubenswrapper[4860]: I1014 15:07:57.995056 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"90824b73-8623-495c-8bed-fdc67bff987a","Type":"ContainerStarted","Data":"7edec3391beba6df09d7c2f4269589118180147c8d551fdd5fca8075f23ccbba"} Oct 14 15:07:57 crc kubenswrapper[4860]: I1014 15:07:57.996587 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vbhtr" event={"ID":"517eb23f-ec49-4288-a019-df9ac4da8ccd","Type":"ContainerStarted","Data":"0224e9df020d5b4d87e2aef861f85bffc6e077352fbc5bee26131927aebea222"} Oct 14 15:07:57 crc kubenswrapper[4860]: I1014 15:07:57.997951 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ac3dbbff-ef4c-461d-b2a0-58284b598cb4","Type":"ContainerStarted","Data":"d02f805aea91f636a5ee23fefe034da3e7fe10e9869853818b6be5c84b5f10c0"} Oct 14 15:07:57 crc kubenswrapper[4860]: I1014 15:07:57.999663 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d1afb1fa-9423-4ef6-a771-76c666ca1038","Type":"ContainerStarted","Data":"527c63dd378764bb65f7b9b451d3afac5be5eb908404b20b8b3a7dc19f33d19a"} Oct 14 15:07:59 crc kubenswrapper[4860]: I1014 15:07:59.088445 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74fa4b97-e7b5-4502-ae81-a13af91e5252" path="/var/lib/kubelet/pods/74fa4b97-e7b5-4502-ae81-a13af91e5252/volumes" Oct 14 15:07:59 crc kubenswrapper[4860]: I1014 15:07:59.092022 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88e4669c-061b-4de0-9ec2-47a676cfe93c" path="/var/lib/kubelet/pods/88e4669c-061b-4de0-9ec2-47a676cfe93c/volumes" Oct 14 15:08:09 crc kubenswrapper[4860]: E1014 15:08:09.102599 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Oct 14 15:08:09 crc kubenswrapper[4860]: E1014 15:08:09.104666 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n55dh676hchd4h56ch59ch5cdh79h66ch694hcch55chb7h5bdh568h58dhch697h64dh65dh544h55fhfdh67fh5d9hch694h6ch577h5fbh598h79q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7p74f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(03a44669-ea47-471b-a369-93f6f85bec6b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:08:09 crc kubenswrapper[4860]: E1014 15:08:09.105992 4860 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="03a44669-ea47-471b-a369-93f6f85bec6b" Oct 14 15:08:10 crc kubenswrapper[4860]: E1014 15:08:10.090075 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="03a44669-ea47-471b-a369-93f6f85bec6b" Oct 14 15:08:12 crc kubenswrapper[4860]: I1014 15:08:12.105203 4860 generic.go:334] "Generic (PLEG): container finished" podID="0619b1f4-ea36-41ab-a97b-2a97d516e53c" containerID="b0c4674217ed197116ed6a4747489ae0525d70bfb6099af3ff0860eec98d67ee" exitCode=0 Oct 14 15:08:12 crc kubenswrapper[4860]: I1014 15:08:12.105284 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0619b1f4-ea36-41ab-a97b-2a97d516e53c","Type":"ContainerDied","Data":"b0c4674217ed197116ed6a4747489ae0525d70bfb6099af3ff0860eec98d67ee"} Oct 14 15:08:12 crc kubenswrapper[4860]: I1014 15:08:12.107629 4860 generic.go:334] "Generic (PLEG): container finished" podID="84bf98f8-38a7-469a-a6ce-f3b573aa1356" containerID="3adde6a89492d8e1cc51667e3b6df018b768336039043eae4f67d1763302f172" exitCode=0 Oct 14 15:08:12 crc kubenswrapper[4860]: I1014 15:08:12.107663 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"84bf98f8-38a7-469a-a6ce-f3b573aa1356","Type":"ContainerDied","Data":"3adde6a89492d8e1cc51667e3b6df018b768336039043eae4f67d1763302f172"} Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.123992 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9ea3e827-d3d5-481d-b8f6-90b20be97f2e","Type":"ContainerStarted","Data":"98935cecedccb2b6566f043037680fe1e371cf30ba209c0e6f74bbb171ac4544"} Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.136209 4860 generic.go:334] "Generic (PLEG): container finished" podID="022b3ae4-c617-4666-afc6-874b561926f4" containerID="2a37aafc386a936a15dbb5e3a28bf4922075ea31d5d8c58daf41a1ba2d807025" exitCode=0 Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.136300 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-znf2c" event={"ID":"022b3ae4-c617-4666-afc6-874b561926f4","Type":"ContainerDied","Data":"2a37aafc386a936a15dbb5e3a28bf4922075ea31d5d8c58daf41a1ba2d807025"} Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.141574 4860 generic.go:334] "Generic (PLEG): container finished" podID="9138e3ca-610f-4970-984b-626c6aab739d" containerID="235c7761e9aab92e0ccffdde38adcce29a13bae8eb21e1f8c0b549a2278e8973" exitCode=0 Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.141631 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" event={"ID":"9138e3ca-610f-4970-984b-626c6aab739d","Type":"ContainerDied","Data":"235c7761e9aab92e0ccffdde38adcce29a13bae8eb21e1f8c0b549a2278e8973"} Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.144562 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0619b1f4-ea36-41ab-a97b-2a97d516e53c","Type":"ContainerStarted","Data":"267d43fbe01eb9d290d4a54d7af81db0f01488916e37d4c5464f8954cfbc3d27"} Oct 14 15:08:14 crc kubenswrapper[4860]: 
Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.162230 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"84bf98f8-38a7-469a-a6ce-f3b573aa1356","Type":"ContainerStarted","Data":"5f519e4e9387f37a834db85aa3a8a176b58192c6c2cd557680b0012b8495d778"}
Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.165980 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ac3dbbff-ef4c-461d-b2a0-58284b598cb4","Type":"ContainerStarted","Data":"391a1abbd0016b7f9a0dfa028449f8cec642d7115abcd5037c30f0fa80494aed"}
Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.168019 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"22d1e6d4-a98e-457e-9e99-8e2f4319031b","Type":"ContainerStarted","Data":"8b165450c585599762306c6be1d877e6149fd2a26a9a1bfabf04baac74422bb9"}
Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.168531 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.170719 4860 generic.go:334] "Generic (PLEG): container finished" podID="517eb23f-ec49-4288-a019-df9ac4da8ccd" containerID="8a8818a575b5e636a71c1992b1456a190bb2cd00b35563798603769daf6284eb" exitCode=0
Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.170765 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vbhtr" event={"ID":"517eb23f-ec49-4288-a019-df9ac4da8ccd","Type":"ContainerDied","Data":"8a8818a575b5e636a71c1992b1456a190bb2cd00b35563798603769daf6284eb"}
Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.174357 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sc6wm" event={"ID":"8fbd86ca-1d38-4b27-bd36-62198c367b3d","Type":"ContainerStarted","Data":"794b8fa69620298344043da6e436379cb2504f959f8de1c8b9c0ed1685e288f6"}
Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.174547 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-sc6wm"
Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.219950 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.046142361 podStartE2EDuration="44.219925709s" podCreationTimestamp="2025-10-14 15:07:30 +0000 UTC" firstStartedPulling="2025-10-14 15:07:32.36019664 +0000 UTC m=+1113.946980089" lastFinishedPulling="2025-10-14 15:07:55.533979988 +0000 UTC m=+1137.120763437" observedRunningTime="2025-10-14 15:08:14.1980417 +0000 UTC m=+1155.784825149" watchObservedRunningTime="2025-10-14 15:08:14.219925709 +0000 UTC m=+1155.806709158"
Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.241364 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=23.780372813 podStartE2EDuration="40.241341117s" podCreationTimestamp="2025-10-14 15:07:34 +0000 UTC" firstStartedPulling="2025-10-14 15:07:56.575556791 +0000 UTC m=+1138.162340240" lastFinishedPulling="2025-10-14 15:08:13.036525095 +0000 UTC m=+1154.623308544" observedRunningTime="2025-10-14 15:08:14.230412393 +0000 UTC m=+1155.817195842" watchObservedRunningTime="2025-10-14 15:08:14.241341117 +0000 UTC m=+1155.828124566"
Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.254325 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sc6wm" podStartSLOduration=20.918385
podStartE2EDuration="36.254308561s" podCreationTimestamp="2025-10-14 15:07:38 +0000 UTC" firstStartedPulling="2025-10-14 15:07:56.2326045 +0000 UTC m=+1137.819387949" lastFinishedPulling="2025-10-14 15:08:11.568528061 +0000 UTC m=+1153.155311510" observedRunningTime="2025-10-14 15:08:14.251416611 +0000 UTC m=+1155.838200060" watchObservedRunningTime="2025-10-14 15:08:14.254308561 +0000 UTC m=+1155.841092010" Oct 14 15:08:14 crc kubenswrapper[4860]: I1014 15:08:14.299818 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=43.299724288 podStartE2EDuration="43.299724288s" podCreationTimestamp="2025-10-14 15:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:08:14.295716971 +0000 UTC m=+1155.882500420" watchObservedRunningTime="2025-10-14 15:08:14.299724288 +0000 UTC m=+1155.886507737" Oct 14 15:08:15 crc kubenswrapper[4860]: I1014 15:08:15.183231 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" event={"ID":"9138e3ca-610f-4970-984b-626c6aab739d","Type":"ContainerStarted","Data":"87a55c9f390fea02e08720e367b4f7555e4daaf5063991cb63c0468b83f6fa21"} Oct 14 15:08:15 crc kubenswrapper[4860]: I1014 15:08:15.183950 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" Oct 14 15:08:15 crc kubenswrapper[4860]: I1014 15:08:15.187581 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vbhtr" event={"ID":"517eb23f-ec49-4288-a019-df9ac4da8ccd","Type":"ContainerStarted","Data":"f4ce12fa8de1905f140d69ee5e0e673c0716c81bced5c946b1d6f2e423c6fe56"} Oct 14 15:08:15 crc kubenswrapper[4860]: I1014 15:08:15.187635 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vbhtr" event={"ID":"517eb23f-ec49-4288-a019-df9ac4da8ccd","Type":"ContainerStarted","Data":"f6dee0473d6428539a370857cd83b68a6bd27c5620a752fe06b44d421e3ae9fc"} Oct 14 15:08:15 crc kubenswrapper[4860]: I1014 15:08:15.187726 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:08:15 crc kubenswrapper[4860]: I1014 15:08:15.187754 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:08:15 crc kubenswrapper[4860]: I1014 15:08:15.189429 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-znf2c" event={"ID":"022b3ae4-c617-4666-afc6-874b561926f4","Type":"ContainerStarted","Data":"4e8439148aff48414919baa1772e64d247631d2f13c92e279003786c67f343ea"} Oct 14 15:08:15 crc kubenswrapper[4860]: I1014 15:08:15.207288 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" podStartSLOduration=3.853517988 podStartE2EDuration="46.207266872s" podCreationTimestamp="2025-10-14 15:07:29 +0000 UTC" firstStartedPulling="2025-10-14 15:07:30.273062394 +0000 UTC m=+1111.859845833" lastFinishedPulling="2025-10-14 15:08:12.626811258 +0000 UTC m=+1154.213594717" observedRunningTime="2025-10-14 15:08:15.19761994 +0000 UTC m=+1156.784403409" watchObservedRunningTime="2025-10-14 15:08:15.207266872 +0000 UTC m=+1156.794050321" Oct 14 15:08:15 crc kubenswrapper[4860]: I1014 15:08:15.225236 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-ovs-vbhtr" podStartSLOduration=22.564660296 podStartE2EDuration="37.225216536s" podCreationTimestamp="2025-10-14 15:07:38 +0000 UTC" firstStartedPulling="2025-10-14 15:07:57.47042395 +0000 UTC m=+1139.057207399" lastFinishedPulling="2025-10-14 15:08:12.13098019 +0000 UTC m=+1153.717763639" observedRunningTime="2025-10-14 15:08:15.219541799 +0000 UTC m=+1156.806325248" watchObservedRunningTime="2025-10-14 15:08:15.225216536 +0000 UTC m=+1156.811999975" Oct 14 15:08:18 crc kubenswrapper[4860]: I1014 15:08:18.811184 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-znf2c" Oct 14 15:08:19 crc kubenswrapper[4860]: I1014 15:08:19.531291 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" Oct 14 15:08:19 crc kubenswrapper[4860]: I1014 15:08:19.552342 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-znf2c" podStartSLOduration=8.877853813 podStartE2EDuration="51.552314623s" podCreationTimestamp="2025-10-14 15:07:28 +0000 UTC" firstStartedPulling="2025-10-14 15:07:29.606587869 +0000 UTC m=+1111.193371318" lastFinishedPulling="2025-10-14 15:08:12.281048679 +0000 UTC m=+1153.867832128" observedRunningTime="2025-10-14 15:08:15.234759077 +0000 UTC m=+1156.821542536" watchObservedRunningTime="2025-10-14 15:08:19.552314623 +0000 UTC m=+1161.139098092" Oct 14 15:08:19 crc kubenswrapper[4860]: I1014 15:08:19.615340 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-znf2c"] Oct 14 15:08:19 crc kubenswrapper[4860]: I1014 15:08:19.615559 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-znf2c" podUID="022b3ae4-c617-4666-afc6-874b561926f4" containerName="dnsmasq-dns" containerID="cri-o://4e8439148aff48414919baa1772e64d247631d2f13c92e279003786c67f343ea" gracePeriod=10 Oct 14 15:08:19 crc kubenswrapper[4860]: I1014 15:08:19.622130 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-znf2c" Oct 14 15:08:21 crc kubenswrapper[4860]: I1014 15:08:21.235474 4860 generic.go:334] "Generic (PLEG): container finished" podID="022b3ae4-c617-4666-afc6-874b561926f4" containerID="4e8439148aff48414919baa1772e64d247631d2f13c92e279003786c67f343ea" exitCode=0 Oct 14 15:08:21 crc kubenswrapper[4860]: I1014 15:08:21.235569 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-znf2c" event={"ID":"022b3ae4-c617-4666-afc6-874b561926f4","Type":"ContainerDied","Data":"4e8439148aff48414919baa1772e64d247631d2f13c92e279003786c67f343ea"} Oct 14 15:08:21 crc kubenswrapper[4860]: I1014 15:08:21.563959 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 14 15:08:21 crc kubenswrapper[4860]: I1014 15:08:21.564004 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 14 15:08:23 crc kubenswrapper[4860]: I1014 15:08:23.297949 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 14 15:08:23 crc kubenswrapper[4860]: I1014 15:08:23.298541 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 14 15:08:23 crc kubenswrapper[4860]: I1014 15:08:23.811610 4860 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-znf2c" podUID="022b3ae4-c617-4666-afc6-874b561926f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.96:5353: connect: connection refused" Oct 14 15:08:24 crc kubenswrapper[4860]: I1014 15:08:24.927735 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 14 15:08:25 crc kubenswrapper[4860]: E1014 15:08:25.088952 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Oct 14 15:08:25 crc kubenswrapper[4860]: E1014 15:08:25.089174 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cbgxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(ac3dbbff-ef4c-461d-b2a0-58284b598cb4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:08:25 crc kubenswrapper[4860]: E1014 15:08:25.090464 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="ac3dbbff-ef4c-461d-b2a0-58284b598cb4" Oct 14 15:08:25 crc kubenswrapper[4860]: E1014 15:08:25.109627 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Oct 14 15:08:25 crc kubenswrapper[4860]: E1014 15:08:25.109802 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b44dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(9ea3e827-d3d5-481d-b8f6-90b20be97f2e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:08:25 crc kubenswrapper[4860]: E1014 15:08:25.111109 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="9ea3e827-d3d5-481d-b8f6-90b20be97f2e" Oct 14 15:08:25 crc kubenswrapper[4860]: I1014 15:08:25.267785 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-znf2c" event={"ID":"022b3ae4-c617-4666-afc6-874b561926f4","Type":"ContainerDied","Data":"10c0befd40d7296496e78a85c848a7ab8800f731c2d0573e15c03ef91bdcd462"} Oct 14 15:08:25 crc kubenswrapper[4860]: I1014 15:08:25.268149 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10c0befd40d7296496e78a85c848a7ab8800f731c2d0573e15c03ef91bdcd462" Oct 14 15:08:25 crc kubenswrapper[4860]: E1014 15:08:25.269734 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="9ea3e827-d3d5-481d-b8f6-90b20be97f2e" Oct 14 15:08:25 crc kubenswrapper[4860]: E1014 15:08:25.270170 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="ac3dbbff-ef4c-461d-b2a0-58284b598cb4" Oct 14 15:08:25 crc kubenswrapper[4860]: I1014 15:08:25.316294 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-znf2c" Oct 14 15:08:25 crc kubenswrapper[4860]: I1014 15:08:25.482357 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/022b3ae4-c617-4666-afc6-874b561926f4-dns-svc\") pod \"022b3ae4-c617-4666-afc6-874b561926f4\" (UID: \"022b3ae4-c617-4666-afc6-874b561926f4\") " Oct 14 15:08:25 crc kubenswrapper[4860]: I1014 15:08:25.482405 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/022b3ae4-c617-4666-afc6-874b561926f4-config\") pod \"022b3ae4-c617-4666-afc6-874b561926f4\" (UID: \"022b3ae4-c617-4666-afc6-874b561926f4\") " Oct 14 15:08:25 crc kubenswrapper[4860]: I1014 15:08:25.482453 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lst8f\" (UniqueName: \"kubernetes.io/projected/022b3ae4-c617-4666-afc6-874b561926f4-kube-api-access-lst8f\") pod \"022b3ae4-c617-4666-afc6-874b561926f4\" (UID: \"022b3ae4-c617-4666-afc6-874b561926f4\") " Oct 14 15:08:25 crc kubenswrapper[4860]: I1014 15:08:25.494767 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/022b3ae4-c617-4666-afc6-874b561926f4-kube-api-access-lst8f" (OuterVolumeSpecName: "kube-api-access-lst8f") pod "022b3ae4-c617-4666-afc6-874b561926f4" (UID: "022b3ae4-c617-4666-afc6-874b561926f4"). InnerVolumeSpecName "kube-api-access-lst8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:08:25 crc kubenswrapper[4860]: I1014 15:08:25.519085 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/022b3ae4-c617-4666-afc6-874b561926f4-config" (OuterVolumeSpecName: "config") pod "022b3ae4-c617-4666-afc6-874b561926f4" (UID: "022b3ae4-c617-4666-afc6-874b561926f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:08:25 crc kubenswrapper[4860]: I1014 15:08:25.526381 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/022b3ae4-c617-4666-afc6-874b561926f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "022b3ae4-c617-4666-afc6-874b561926f4" (UID: "022b3ae4-c617-4666-afc6-874b561926f4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:08:25 crc kubenswrapper[4860]: I1014 15:08:25.584402 4860 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/022b3ae4-c617-4666-afc6-874b561926f4-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:25 crc kubenswrapper[4860]: I1014 15:08:25.584441 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/022b3ae4-c617-4666-afc6-874b561926f4-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:25 crc kubenswrapper[4860]: I1014 15:08:25.584451 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lst8f\" (UniqueName: \"kubernetes.io/projected/022b3ae4-c617-4666-afc6-874b561926f4-kube-api-access-lst8f\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:26 crc kubenswrapper[4860]: I1014 15:08:26.275483 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-znf2c" Oct 14 15:08:26 crc kubenswrapper[4860]: I1014 15:08:26.316270 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-znf2c"] Oct 14 15:08:26 crc kubenswrapper[4860]: I1014 15:08:26.321957 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-znf2c"] Oct 14 15:08:26 crc kubenswrapper[4860]: I1014 15:08:26.928946 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 14 15:08:27 crc kubenswrapper[4860]: I1014 15:08:27.000129 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="0619b1f4-ea36-41ab-a97b-2a97d516e53c" containerName="galera" probeResult="failure" output=< Oct 14 15:08:27 crc kubenswrapper[4860]: wsrep_local_state_comment (Joined) differs from Synced Oct 14 15:08:27 crc kubenswrapper[4860]: > Oct 14 15:08:27 crc kubenswrapper[4860]: I1014 15:08:27.071456 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="022b3ae4-c617-4666-afc6-874b561926f4" path="/var/lib/kubelet/pods/022b3ae4-c617-4666-afc6-874b561926f4/volumes" Oct 14 15:08:27 crc kubenswrapper[4860]: I1014 15:08:27.142152 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Oct 14 15:08:27 crc kubenswrapper[4860]: E1014 15:08:27.144297 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="9ea3e827-d3d5-481d-b8f6-90b20be97f2e" Oct 14 15:08:27 crc kubenswrapper[4860]: I1014 15:08:27.188295 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 14 15:08:27 crc kubenswrapper[4860]: I1014 15:08:27.284857 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"03a44669-ea47-471b-a369-93f6f85bec6b","Type":"ContainerStarted","Data":"339af2a9d013a59f8a28435cec1cdd638aea798e2d519e82f0d807b34db8cc48"} Oct 14 15:08:27 crc kubenswrapper[4860]: I1014 15:08:27.285228 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 14 15:08:27 crc kubenswrapper[4860]: E1014 15:08:27.285352 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="9ea3e827-d3d5-481d-b8f6-90b20be97f2e" Oct 14 15:08:27 crc kubenswrapper[4860]: I1014 15:08:27.285461 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 14 15:08:27 crc kubenswrapper[4860]: I1014 15:08:27.305052 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=24.395277963 podStartE2EDuration="54.305037729s" podCreationTimestamp="2025-10-14 15:07:33 +0000 UTC" firstStartedPulling="2025-10-14 15:07:56.220486727 +0000 UTC m=+1137.807270176" lastFinishedPulling="2025-10-14 15:08:26.130246493 +0000 UTC m=+1167.717029942" observedRunningTime="2025-10-14 15:08:27.304146227 +0000 UTC m=+1168.890929676" watchObservedRunningTime="2025-10-14 15:08:27.305037729 +0000 UTC m=+1168.891821178" Oct 14 15:08:27 crc kubenswrapper[4860]: I1014 15:08:27.323358 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 14 15:08:27 crc kubenswrapper[4860]: I1014 15:08:27.393102 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 14 15:08:27 crc kubenswrapper[4860]: I1014 15:08:27.393161 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 14 15:08:27 crc kubenswrapper[4860]: E1014 15:08:27.394867 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="ac3dbbff-ef4c-461d-b2a0-58284b598cb4" Oct 14 15:08:27 crc kubenswrapper[4860]: I1014 15:08:27.438823 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 14 15:08:28 crc kubenswrapper[4860]: E1014 15:08:28.292504 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="ac3dbbff-ef4c-461d-b2a0-58284b598cb4" Oct 14 15:08:28 crc kubenswrapper[4860]: E1014 15:08:28.292536 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="9ea3e827-d3d5-481d-b8f6-90b20be97f2e" Oct 14 15:08:28 crc kubenswrapper[4860]: I1014 15:08:28.334067 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 14 15:08:29 crc kubenswrapper[4860]: I1014 15:08:29.298003 4860 generic.go:334] "Generic (PLEG): container finished" podID="d1afb1fa-9423-4ef6-a771-76c666ca1038" containerID="527c63dd378764bb65f7b9b451d3afac5be5eb908404b20b8b3a7dc19f33d19a" exitCode=0 Oct 14 15:08:29 crc kubenswrapper[4860]: I1014 15:08:29.298203 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"d1afb1fa-9423-4ef6-a771-76c666ca1038","Type":"ContainerDied","Data":"527c63dd378764bb65f7b9b451d3afac5be5eb908404b20b8b3a7dc19f33d19a"} Oct 14 15:08:29 crc kubenswrapper[4860]: I1014 15:08:29.303156 4860 generic.go:334] "Generic (PLEG): container finished" podID="90824b73-8623-495c-8bed-fdc67bff987a" containerID="7edec3391beba6df09d7c2f4269589118180147c8d551fdd5fca8075f23ccbba" exitCode=0 Oct 14 15:08:29 crc kubenswrapper[4860]: I1014 15:08:29.303216 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"90824b73-8623-495c-8bed-fdc67bff987a","Type":"ContainerDied","Data":"7edec3391beba6df09d7c2f4269589118180147c8d551fdd5fca8075f23ccbba"} Oct 14 15:08:29 crc kubenswrapper[4860]: E1014 15:08:29.312450 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="ac3dbbff-ef4c-461d-b2a0-58284b598cb4" Oct 14 15:08:29 crc kubenswrapper[4860]: E1014 15:08:29.312687 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="9ea3e827-d3d5-481d-b8f6-90b20be97f2e" Oct 14 15:08:29 crc kubenswrapper[4860]: I1014 15:08:29.396663 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 14 15:08:29 crc kubenswrapper[4860]: I1014 15:08:29.506955 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 14 15:08:30 crc kubenswrapper[4860]: I1014 15:08:30.322587 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d1afb1fa-9423-4ef6-a771-76c666ca1038","Type":"ContainerStarted","Data":"05178100a0d4a92372cb32e7d2c9ec1065ec0873b9781d7f70040e70d9b4780f"} Oct 14 15:08:30 crc kubenswrapper[4860]: I1014 15:08:30.322782 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:08:30 crc kubenswrapper[4860]: I1014 15:08:30.328951 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"90824b73-8623-495c-8bed-fdc67bff987a","Type":"ContainerStarted","Data":"70591ac01a99688424b0ae9e6880f46c2db6588cc2fa664e4270a59c3f8df7ee"} Oct 14 15:08:30 crc kubenswrapper[4860]: I1014 15:08:30.329253 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 14 15:08:30 crc kubenswrapper[4860]: I1014 15:08:30.494750 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.302052461 podStartE2EDuration="1m1.494731544s" podCreationTimestamp="2025-10-14 15:07:29 +0000 UTC" firstStartedPulling="2025-10-14 15:07:31.339820058 +0000 UTC m=+1112.926603497" lastFinishedPulling="2025-10-14 15:07:55.532499111 +0000 UTC m=+1137.119282580" observedRunningTime="2025-10-14 15:08:30.37669614 +0000 UTC m=+1171.963479589" watchObservedRunningTime="2025-10-14 15:08:30.494731544 +0000 UTC m=+1172.081514993" Oct 14 15:08:31 crc kubenswrapper[4860]: I1014 15:08:31.616662 4860 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 14 15:08:31 crc kubenswrapper[4860]: I1014 15:08:31.636100 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.99608778 podStartE2EDuration="1m3.636081781s" podCreationTimestamp="2025-10-14 15:07:28 +0000 UTC" firstStartedPulling="2025-10-14 15:07:30.791804676 +0000 UTC m=+1112.378588125" lastFinishedPulling="2025-10-14 15:07:54.431798667 +0000 UTC m=+1136.018582126" observedRunningTime="2025-10-14 15:08:30.495777819 +0000 UTC m=+1172.082561268" watchObservedRunningTime="2025-10-14 15:08:31.636081781 +0000 UTC m=+1173.222865230" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.051671 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-dp25v"] Oct 14 15:08:33 crc kubenswrapper[4860]: E1014 15:08:33.053077 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022b3ae4-c617-4666-afc6-874b561926f4" containerName="init" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.053178 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="022b3ae4-c617-4666-afc6-874b561926f4" containerName="init" Oct 14 15:08:33 crc kubenswrapper[4860]: E1014 15:08:33.053244 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022b3ae4-c617-4666-afc6-874b561926f4" containerName="dnsmasq-dns" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.053307 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="022b3ae4-c617-4666-afc6-874b561926f4" containerName="dnsmasq-dns" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.053518 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="022b3ae4-c617-4666-afc6-874b561926f4" containerName="dnsmasq-dns" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.054219 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dp25v" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.112475 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dp25v"] Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.209513 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8n7w\" (UniqueName: \"kubernetes.io/projected/f85cdd63-2581-4545-8740-029dbf61a67a-kube-api-access-w8n7w\") pod \"keystone-db-create-dp25v\" (UID: \"f85cdd63-2581-4545-8740-029dbf61a67a\") " pod="openstack/keystone-db-create-dp25v" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.309311 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-7n45x"] Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.310292 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7n45x" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.311513 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8n7w\" (UniqueName: \"kubernetes.io/projected/f85cdd63-2581-4545-8740-029dbf61a67a-kube-api-access-w8n7w\") pod \"keystone-db-create-dp25v\" (UID: \"f85cdd63-2581-4545-8740-029dbf61a67a\") " pod="openstack/keystone-db-create-dp25v" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.331158 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7n45x"] Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.376700 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8n7w\" (UniqueName: \"kubernetes.io/projected/f85cdd63-2581-4545-8740-029dbf61a67a-kube-api-access-w8n7w\") pod \"keystone-db-create-dp25v\" (UID: \"f85cdd63-2581-4545-8740-029dbf61a67a\") " pod="openstack/keystone-db-create-dp25v" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.413043 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk72f\" (UniqueName: \"kubernetes.io/projected/48903075-196c-4f29-8246-9e1a3ed97181-kube-api-access-kk72f\") pod \"placement-db-create-7n45x\" (UID: \"48903075-196c-4f29-8246-9e1a3ed97181\") " pod="openstack/placement-db-create-7n45x" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.422208 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.514550 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk72f\" (UniqueName: \"kubernetes.io/projected/48903075-196c-4f29-8246-9e1a3ed97181-kube-api-access-kk72f\") pod \"placement-db-create-7n45x\" (UID: \"48903075-196c-4f29-8246-9e1a3ed97181\") " pod="openstack/placement-db-create-7n45x" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.535300 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk72f\" (UniqueName: \"kubernetes.io/projected/48903075-196c-4f29-8246-9e1a3ed97181-kube-api-access-kk72f\") pod \"placement-db-create-7n45x\" (UID: \"48903075-196c-4f29-8246-9e1a3ed97181\") " pod="openstack/placement-db-create-7n45x" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.615837 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-258hf"] Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.617149 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-258hf" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.632147 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7n45x" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.675636 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-dp25v" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.677593 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-258hf"] Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.718251 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqh8z\" (UniqueName: \"kubernetes.io/projected/d628649b-8b7c-44e8-b047-f00611d715d3-kube-api-access-qqh8z\") pod \"glance-db-create-258hf\" (UID: \"d628649b-8b7c-44e8-b047-f00611d715d3\") " pod="openstack/glance-db-create-258hf" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.821411 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqh8z\" (UniqueName: \"kubernetes.io/projected/d628649b-8b7c-44e8-b047-f00611d715d3-kube-api-access-qqh8z\") pod \"glance-db-create-258hf\" (UID: \"d628649b-8b7c-44e8-b047-f00611d715d3\") " pod="openstack/glance-db-create-258hf" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.841831 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqh8z\" (UniqueName: \"kubernetes.io/projected/d628649b-8b7c-44e8-b047-f00611d715d3-kube-api-access-qqh8z\") pod \"glance-db-create-258hf\" (UID: \"d628649b-8b7c-44e8-b047-f00611d715d3\") " pod="openstack/glance-db-create-258hf" Oct 14 15:08:33 crc kubenswrapper[4860]: I1014 15:08:33.946983 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-258hf" Oct 14 15:08:34 crc kubenswrapper[4860]: I1014 15:08:34.241333 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7n45x"] Oct 14 15:08:34 crc kubenswrapper[4860]: W1014 15:08:34.259931 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48903075_196c_4f29_8246_9e1a3ed97181.slice/crio-6ffeb434c6f8004878ad8a0a8375aacb2e4a6bfa0f0f578682858172ac2d37de WatchSource:0}: Error finding container 6ffeb434c6f8004878ad8a0a8375aacb2e4a6bfa0f0f578682858172ac2d37de: Status 404 returned error can't find the container with id 6ffeb434c6f8004878ad8a0a8375aacb2e4a6bfa0f0f578682858172ac2d37de Oct 14 15:08:34 crc kubenswrapper[4860]: I1014 15:08:34.331053 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-258hf"] Oct 14 15:08:34 crc kubenswrapper[4860]: W1014 15:08:34.342821 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd628649b_8b7c_44e8_b047_f00611d715d3.slice/crio-c978bc313cacc1d062784bd43aba8d442c23590d9db599ded73115ac39fe3be8 WatchSource:0}: Error finding container c978bc313cacc1d062784bd43aba8d442c23590d9db599ded73115ac39fe3be8: Status 404 returned error can't find the container with id c978bc313cacc1d062784bd43aba8d442c23590d9db599ded73115ac39fe3be8 Oct 14 15:08:34 crc kubenswrapper[4860]: I1014 15:08:34.349949 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dp25v"] Oct 14 15:08:34 crc kubenswrapper[4860]: W1014 15:08:34.355344 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85cdd63_2581_4545_8740_029dbf61a67a.slice/crio-2b1a401eea0c842305471d9f0324e305dd410d0ef4d8e40ab45e49018af37e71 WatchSource:0}: Error finding container 
Oct 14 15:08:34 crc kubenswrapper[4860]: I1014 15:08:34.379324 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7n45x" event={"ID":"48903075-196c-4f29-8246-9e1a3ed97181","Type":"ContainerStarted","Data":"6ffeb434c6f8004878ad8a0a8375aacb2e4a6bfa0f0f578682858172ac2d37de"}
Oct 14 15:08:34 crc kubenswrapper[4860]: I1014 15:08:34.380716 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-258hf" event={"ID":"d628649b-8b7c-44e8-b047-f00611d715d3","Type":"ContainerStarted","Data":"c978bc313cacc1d062784bd43aba8d442c23590d9db599ded73115ac39fe3be8"}
Oct 14 15:08:34 crc kubenswrapper[4860]: I1014 15:08:34.382136 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dp25v" event={"ID":"f85cdd63-2581-4545-8740-029dbf61a67a","Type":"ContainerStarted","Data":"2b1a401eea0c842305471d9f0324e305dd410d0ef4d8e40ab45e49018af37e71"}
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.018066 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-9r9x9"]
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.019395 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9"
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.039972 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-9r9x9"]
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.141824 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc344801-8b18-476c-a055-b7194de3bc7b-config\") pod \"dnsmasq-dns-7cb5889db5-9r9x9\" (UID: \"dc344801-8b18-476c-a055-b7194de3bc7b\") " pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9"
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.142134 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc344801-8b18-476c-a055-b7194de3bc7b-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-9r9x9\" (UID: \"dc344801-8b18-476c-a055-b7194de3bc7b\") " pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9"
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.142309 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l8dg\" (UniqueName: \"kubernetes.io/projected/dc344801-8b18-476c-a055-b7194de3bc7b-kube-api-access-5l8dg\") pod \"dnsmasq-dns-7cb5889db5-9r9x9\" (UID: \"dc344801-8b18-476c-a055-b7194de3bc7b\") " pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9"
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.243468 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc344801-8b18-476c-a055-b7194de3bc7b-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-9r9x9\" (UID: \"dc344801-8b18-476c-a055-b7194de3bc7b\") " pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9"
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.243523 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l8dg\" (UniqueName: \"kubernetes.io/projected/dc344801-8b18-476c-a055-b7194de3bc7b-kube-api-access-5l8dg\") pod \"dnsmasq-dns-7cb5889db5-9r9x9\" (UID: \"dc344801-8b18-476c-a055-b7194de3bc7b\") " pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9"
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.243570 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc344801-8b18-476c-a055-b7194de3bc7b-config\") pod \"dnsmasq-dns-7cb5889db5-9r9x9\" (UID: \"dc344801-8b18-476c-a055-b7194de3bc7b\") " pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9"
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.244510 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc344801-8b18-476c-a055-b7194de3bc7b-config\") pod \"dnsmasq-dns-7cb5889db5-9r9x9\" (UID: \"dc344801-8b18-476c-a055-b7194de3bc7b\") " pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9"
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.244582 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc344801-8b18-476c-a055-b7194de3bc7b-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-9r9x9\" (UID: \"dc344801-8b18-476c-a055-b7194de3bc7b\") " pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9"
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.284086 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l8dg\" (UniqueName: \"kubernetes.io/projected/dc344801-8b18-476c-a055-b7194de3bc7b-kube-api-access-5l8dg\") pod \"dnsmasq-dns-7cb5889db5-9r9x9\" (UID: \"dc344801-8b18-476c-a055-b7194de3bc7b\") " pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9"
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.335092 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9"
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.406417 4860 generic.go:334] "Generic (PLEG): container finished" podID="48903075-196c-4f29-8246-9e1a3ed97181" containerID="aaa6299e8b48a1795d7f5674299f6283d9d2d00de1682dd93e75ee818a2af147" exitCode=0
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.406804 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7n45x" event={"ID":"48903075-196c-4f29-8246-9e1a3ed97181","Type":"ContainerDied","Data":"aaa6299e8b48a1795d7f5674299f6283d9d2d00de1682dd93e75ee818a2af147"}
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.408462 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-258hf" event={"ID":"d628649b-8b7c-44e8-b047-f00611d715d3","Type":"ContainerStarted","Data":"ff96fb9146648fe633755c2438c955e5f50d7c147528254d5bb41cb10da9bb91"}
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.420697 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dp25v" event={"ID":"f85cdd63-2581-4545-8740-029dbf61a67a","Type":"ContainerStarted","Data":"93e481fe258c101046812b948a7a17acc1b8c1ff0071c5e5dcc93a1b3a0b8bed"}
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.490722 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-dp25v" podStartSLOduration=2.490704223 podStartE2EDuration="2.490704223s" podCreationTimestamp="2025-10-14 15:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:08:35.457554122 +0000 UTC m=+1177.044337571" watchObservedRunningTime="2025-10-14 15:08:35.490704223 +0000 UTC m=+1177.077487682"
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.735159 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-258hf" podStartSLOduration=2.735137274 podStartE2EDuration="2.735137274s" podCreationTimestamp="2025-10-14 15:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:08:35.49139674 +0000 UTC m=+1177.078180189" watchObservedRunningTime="2025-10-14 15:08:35.735137274 +0000 UTC m=+1177.321920723"
Oct 14 15:08:35 crc kubenswrapper[4860]: I1014 15:08:35.741674 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-9r9x9"]
Oct 14 15:08:35 crc kubenswrapper[4860]: W1014 15:08:35.749249 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc344801_8b18_476c_a055_b7194de3bc7b.slice/crio-75f886300c32b51ba2c13ab55f15d3566dc3f38aa7e6930e209246d9af6fae5e WatchSource:0}: Error finding container 75f886300c32b51ba2c13ab55f15d3566dc3f38aa7e6930e209246d9af6fae5e: Status 404 returned error can't find the container with id 75f886300c32b51ba2c13ab55f15d3566dc3f38aa7e6930e209246d9af6fae5e
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.254819 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.260897 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.266866 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.266940 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.267118 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.267114 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dmkrd"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.300377 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.352255 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-ntw94"]
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.353495 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ntw94"
Need to start a new one" pod="openstack/swift-ring-rebalance-ntw94" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.356162 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.357164 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.357548 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.392395 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-cache\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.392460 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.392486 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.392697 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-lock\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.392970 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x8kf\" (UniqueName: \"kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-kube-api-access-7x8kf\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.410540 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-ntw94"] Oct 14 15:08:36 crc kubenswrapper[4860]: E1014 15:08:36.417136 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-t4djz ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-t4djz ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-ntw94" podUID="4e0fcad4-c6e1-4913-88d8-06ddda674aa7" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.437796 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-b2bd2"] Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.438803 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.445448 4860 generic.go:334] "Generic (PLEG): container finished" podID="d628649b-8b7c-44e8-b047-f00611d715d3" containerID="ff96fb9146648fe633755c2438c955e5f50d7c147528254d5bb41cb10da9bb91" exitCode=0 Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.445516 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-258hf" event={"ID":"d628649b-8b7c-44e8-b047-f00611d715d3","Type":"ContainerDied","Data":"ff96fb9146648fe633755c2438c955e5f50d7c147528254d5bb41cb10da9bb91"} Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.447228 4860 generic.go:334] "Generic (PLEG): container finished" podID="dc344801-8b18-476c-a055-b7194de3bc7b" containerID="625ab580cd3a97e0fae897632bd6968f1ae5bddbcc3f33e9e2e7e84a11b51584" exitCode=0 Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.447304 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9" event={"ID":"dc344801-8b18-476c-a055-b7194de3bc7b","Type":"ContainerDied","Data":"625ab580cd3a97e0fae897632bd6968f1ae5bddbcc3f33e9e2e7e84a11b51584"} Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.447325 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9" event={"ID":"dc344801-8b18-476c-a055-b7194de3bc7b","Type":"ContainerStarted","Data":"75f886300c32b51ba2c13ab55f15d3566dc3f38aa7e6930e209246d9af6fae5e"} Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.466198 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-b2bd2"] Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.485155 4860 generic.go:334] "Generic (PLEG): container finished" podID="f85cdd63-2581-4545-8740-029dbf61a67a" containerID="93e481fe258c101046812b948a7a17acc1b8c1ff0071c5e5dcc93a1b3a0b8bed" exitCode=0 Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.485227 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ntw94" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.485796 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dp25v" event={"ID":"f85cdd63-2581-4545-8740-029dbf61a67a","Type":"ContainerDied","Data":"93e481fe258c101046812b948a7a17acc1b8c1ff0071c5e5dcc93a1b3a0b8bed"} Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.495947 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.495990 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-swiftconf\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.496013 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.496053 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-lock\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.496081 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-scripts\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94" Oct 14 15:08:36 crc kubenswrapper[4860]: E1014 15:08:36.496200 4860 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 15:08:36 crc kubenswrapper[4860]: E1014 15:08:36.496213 4860 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.496496 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.496624 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-lock\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0" Oct 14 15:08:36 crc kubenswrapper[4860]: E1014 15:08:36.496691 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift podName:e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a nodeName:}" 
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.497144 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-dispersionconf\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.497182 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-combined-ca-bundle\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.497215 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-etc-swift\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.497242 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4djz\" (UniqueName: \"kubernetes.io/projected/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-kube-api-access-t4djz\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.497266 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-ring-data-devices\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.497290 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x8kf\" (UniqueName: \"kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-kube-api-access-7x8kf\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.497437 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-cache\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.497718 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-cache\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.557151 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-ntw94"]
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.561127 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x8kf\" (UniqueName: \"kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-kube-api-access-7x8kf\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.596283 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.600393 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc7c2\" (UniqueName: \"kubernetes.io/projected/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-kube-api-access-cc7c2\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.600457 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-swiftconf\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.600490 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-etc-swift\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.603093 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-scripts\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.603145 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-dispersionconf\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.603181 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-scripts\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.603195 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-ring-data-devices\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.603234 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-dispersionconf\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.603269 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-combined-ca-bundle\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.603296 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-combined-ca-bundle\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.603345 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-etc-swift\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.603395 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4djz\" (UniqueName: \"kubernetes.io/projected/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-kube-api-access-t4djz\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.603415 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-swiftconf\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.603446 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-ring-data-devices\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.604413 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-swiftconf\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.604952 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-scripts\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94"
Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.605833 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-etc-swift\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94"
"MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-etc-swift\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.607512 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-ring-data-devices\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.612625 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-dispersionconf\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.638821 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4djz\" (UniqueName: \"kubernetes.io/projected/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-kube-api-access-t4djz\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.641812 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-combined-ca-bundle\") pod \"swift-ring-rebalance-ntw94\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " pod="openstack/swift-ring-rebalance-ntw94" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.653930 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ntw94" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.705794 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-swiftconf\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.705869 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc7c2\" (UniqueName: \"kubernetes.io/projected/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-kube-api-access-cc7c2\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.705906 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-etc-swift\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.705939 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-dispersionconf\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.705960 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-scripts\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.706089 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-ring-data-devices\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.706124 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-combined-ca-bundle\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.707067 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-etc-swift\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.707613 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-scripts\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.711459 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-ring-data-devices\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.714689 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-combined-ca-bundle\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.719352 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-swiftconf\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.719563 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-dispersionconf\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.726563 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc7c2\" (UniqueName: \"kubernetes.io/projected/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-kube-api-access-cc7c2\") pod \"swift-ring-rebalance-b2bd2\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.757421 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.808275 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-swiftconf\") pod \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.809361 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-scripts\") pod \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.809400 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-ring-data-devices\") pod \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.809430 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-combined-ca-bundle\") pod \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.809477 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4djz\" (UniqueName: \"kubernetes.io/projected/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-kube-api-access-t4djz\") pod \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.809539 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-etc-swift\") pod \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.809631 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-dispersionconf\") pod \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\" (UID: \"4e0fcad4-c6e1-4913-88d8-06ddda674aa7\") " Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.811058 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4e0fcad4-c6e1-4913-88d8-06ddda674aa7" (UID: "4e0fcad4-c6e1-4913-88d8-06ddda674aa7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.814077 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-scripts" (OuterVolumeSpecName: "scripts") pod "4e0fcad4-c6e1-4913-88d8-06ddda674aa7" (UID: "4e0fcad4-c6e1-4913-88d8-06ddda674aa7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.814516 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-kube-api-access-t4djz" (OuterVolumeSpecName: "kube-api-access-t4djz") pod "4e0fcad4-c6e1-4913-88d8-06ddda674aa7" (UID: "4e0fcad4-c6e1-4913-88d8-06ddda674aa7"). InnerVolumeSpecName "kube-api-access-t4djz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.814534 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4e0fcad4-c6e1-4913-88d8-06ddda674aa7" (UID: "4e0fcad4-c6e1-4913-88d8-06ddda674aa7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.814607 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4e0fcad4-c6e1-4913-88d8-06ddda674aa7" (UID: "4e0fcad4-c6e1-4913-88d8-06ddda674aa7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.830713 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e0fcad4-c6e1-4913-88d8-06ddda674aa7" (UID: "4e0fcad4-c6e1-4913-88d8-06ddda674aa7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.833142 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4e0fcad4-c6e1-4913-88d8-06ddda674aa7" (UID: "4e0fcad4-c6e1-4913-88d8-06ddda674aa7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.871600 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7n45x" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.914009 4860 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.914053 4860 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.914064 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.914072 4860 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.914081 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.914090 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4djz\" (UniqueName: \"kubernetes.io/projected/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-kube-api-access-t4djz\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:36 crc kubenswrapper[4860]: I1014 15:08:36.914097 4860 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4e0fcad4-c6e1-4913-88d8-06ddda674aa7-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:37 crc kubenswrapper[4860]: I1014 15:08:37.014662 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk72f\" (UniqueName: \"kubernetes.io/projected/48903075-196c-4f29-8246-9e1a3ed97181-kube-api-access-kk72f\") pod \"48903075-196c-4f29-8246-9e1a3ed97181\" (UID: \"48903075-196c-4f29-8246-9e1a3ed97181\") " Oct 14 15:08:37 crc kubenswrapper[4860]: I1014 15:08:37.014983 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0" Oct 14 15:08:37 crc kubenswrapper[4860]: E1014 15:08:37.015205 4860 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 15:08:37 crc kubenswrapper[4860]: E1014 15:08:37.015237 4860 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 15:08:37 crc kubenswrapper[4860]: E1014 15:08:37.015305 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift podName:e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a nodeName:}" failed. No retries permitted until 2025-10-14 15:08:38.015277096 +0000 UTC m=+1179.602060545 (durationBeforeRetry 1s). 
Oct 14 15:08:37 crc kubenswrapper[4860]: I1014 15:08:37.021696 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48903075-196c-4f29-8246-9e1a3ed97181-kube-api-access-kk72f" (OuterVolumeSpecName: "kube-api-access-kk72f") pod "48903075-196c-4f29-8246-9e1a3ed97181" (UID: "48903075-196c-4f29-8246-9e1a3ed97181"). InnerVolumeSpecName "kube-api-access-kk72f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:08:37 crc kubenswrapper[4860]: I1014 15:08:37.116907 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk72f\" (UniqueName: \"kubernetes.io/projected/48903075-196c-4f29-8246-9e1a3ed97181-kube-api-access-kk72f\") on node \"crc\" DevicePath \"\""
Oct 14 15:08:37 crc kubenswrapper[4860]: I1014 15:08:37.286584 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-b2bd2"]
Oct 14 15:08:37 crc kubenswrapper[4860]: I1014 15:08:37.492324 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-b2bd2" event={"ID":"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd","Type":"ContainerStarted","Data":"f9bcfe6154f381448458709d6dc035f3c1ef03c758db344a18c6474fa9e9ed36"}
Oct 14 15:08:37 crc kubenswrapper[4860]: I1014 15:08:37.494570 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7n45x"
Oct 14 15:08:37 crc kubenswrapper[4860]: I1014 15:08:37.494567 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7n45x" event={"ID":"48903075-196c-4f29-8246-9e1a3ed97181","Type":"ContainerDied","Data":"6ffeb434c6f8004878ad8a0a8375aacb2e4a6bfa0f0f578682858172ac2d37de"}
Oct 14 15:08:37 crc kubenswrapper[4860]: I1014 15:08:37.494708 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ffeb434c6f8004878ad8a0a8375aacb2e4a6bfa0f0f578682858172ac2d37de"
Oct 14 15:08:37 crc kubenswrapper[4860]: I1014 15:08:37.496612 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9" event={"ID":"dc344801-8b18-476c-a055-b7194de3bc7b","Type":"ContainerStarted","Data":"8dcac9d16ee612e7bfa8a7959ec02ea1827d77f1edf78ac4b1cf04f84f464110"}
Oct 14 15:08:37 crc kubenswrapper[4860]: I1014 15:08:37.496626 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ntw94"
Oct 14 15:08:37 crc kubenswrapper[4860]: I1014 15:08:37.519309 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9" podStartSLOduration=3.519291173 podStartE2EDuration="3.519291173s" podCreationTimestamp="2025-10-14 15:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:08:37.517161462 +0000 UTC m=+1179.103944921" watchObservedRunningTime="2025-10-14 15:08:37.519291173 +0000 UTC m=+1179.106074622"
Oct 14 15:08:37 crc kubenswrapper[4860]: I1014 15:08:37.573090 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-ntw94"]
Oct 14 15:08:37 crc kubenswrapper[4860]: I1014 15:08:37.579528 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-ntw94"]
Oct 14 15:08:37 crc kubenswrapper[4860]: I1014 15:08:37.875240 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-258hf"
Oct 14 15:08:37 crc kubenswrapper[4860]: I1014 15:08:37.977784 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dp25v"
Oct 14 15:08:38 crc kubenswrapper[4860]: I1014 15:08:38.040941 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqh8z\" (UniqueName: \"kubernetes.io/projected/d628649b-8b7c-44e8-b047-f00611d715d3-kube-api-access-qqh8z\") pod \"d628649b-8b7c-44e8-b047-f00611d715d3\" (UID: \"d628649b-8b7c-44e8-b047-f00611d715d3\") "
Oct 14 15:08:38 crc kubenswrapper[4860]: I1014 15:08:38.041326 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0"
Oct 14 15:08:38 crc kubenswrapper[4860]: E1014 15:08:38.041573 4860 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Oct 14 15:08:38 crc kubenswrapper[4860]: E1014 15:08:38.041588 4860 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Oct 14 15:08:38 crc kubenswrapper[4860]: E1014 15:08:38.041634 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift podName:e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a nodeName:}" failed. No retries permitted until 2025-10-14 15:08:40.041618273 +0000 UTC m=+1181.628401722 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift") pod "swift-storage-0" (UID: "e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a") : configmap "swift-ring-files" not found
Oct 14 15:08:38 crc kubenswrapper[4860]: I1014 15:08:38.056945 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d628649b-8b7c-44e8-b047-f00611d715d3-kube-api-access-qqh8z" (OuterVolumeSpecName: "kube-api-access-qqh8z") pod "d628649b-8b7c-44e8-b047-f00611d715d3" (UID: "d628649b-8b7c-44e8-b047-f00611d715d3"). InnerVolumeSpecName "kube-api-access-qqh8z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:08:38 crc kubenswrapper[4860]: I1014 15:08:38.142590 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8n7w\" (UniqueName: \"kubernetes.io/projected/f85cdd63-2581-4545-8740-029dbf61a67a-kube-api-access-w8n7w\") pod \"f85cdd63-2581-4545-8740-029dbf61a67a\" (UID: \"f85cdd63-2581-4545-8740-029dbf61a67a\") "
Oct 14 15:08:38 crc kubenswrapper[4860]: I1014 15:08:38.143144 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqh8z\" (UniqueName: \"kubernetes.io/projected/d628649b-8b7c-44e8-b047-f00611d715d3-kube-api-access-qqh8z\") on node \"crc\" DevicePath \"\""
Oct 14 15:08:38 crc kubenswrapper[4860]: I1014 15:08:38.146931 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85cdd63-2581-4545-8740-029dbf61a67a-kube-api-access-w8n7w" (OuterVolumeSpecName: "kube-api-access-w8n7w") pod "f85cdd63-2581-4545-8740-029dbf61a67a" (UID: "f85cdd63-2581-4545-8740-029dbf61a67a"). InnerVolumeSpecName "kube-api-access-w8n7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:08:38 crc kubenswrapper[4860]: I1014 15:08:38.244325 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8n7w\" (UniqueName: \"kubernetes.io/projected/f85cdd63-2581-4545-8740-029dbf61a67a-kube-api-access-w8n7w\") on node \"crc\" DevicePath \"\""
Oct 14 15:08:38 crc kubenswrapper[4860]: I1014 15:08:38.504384 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-258hf"
Oct 14 15:08:38 crc kubenswrapper[4860]: I1014 15:08:38.504909 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-258hf" event={"ID":"d628649b-8b7c-44e8-b047-f00611d715d3","Type":"ContainerDied","Data":"c978bc313cacc1d062784bd43aba8d442c23590d9db599ded73115ac39fe3be8"}
Oct 14 15:08:38 crc kubenswrapper[4860]: I1014 15:08:38.504933 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c978bc313cacc1d062784bd43aba8d442c23590d9db599ded73115ac39fe3be8"
Oct 14 15:08:38 crc kubenswrapper[4860]: I1014 15:08:38.507481 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dp25v"
Need to start a new one" pod="openstack/keystone-db-create-dp25v" Oct 14 15:08:38 crc kubenswrapper[4860]: I1014 15:08:38.507516 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dp25v" event={"ID":"f85cdd63-2581-4545-8740-029dbf61a67a","Type":"ContainerDied","Data":"2b1a401eea0c842305471d9f0324e305dd410d0ef4d8e40ab45e49018af37e71"} Oct 14 15:08:38 crc kubenswrapper[4860]: I1014 15:08:38.507546 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b1a401eea0c842305471d9f0324e305dd410d0ef4d8e40ab45e49018af37e71" Oct 14 15:08:38 crc kubenswrapper[4860]: I1014 15:08:38.507710 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9" Oct 14 15:08:39 crc kubenswrapper[4860]: I1014 15:08:39.073173 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e0fcad4-c6e1-4913-88d8-06ddda674aa7" path="/var/lib/kubelet/pods/4e0fcad4-c6e1-4913-88d8-06ddda674aa7/volumes" Oct 14 15:08:40 crc kubenswrapper[4860]: I1014 15:08:40.071705 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0" Oct 14 15:08:40 crc kubenswrapper[4860]: E1014 15:08:40.071927 4860 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 15:08:40 crc kubenswrapper[4860]: E1014 15:08:40.071952 4860 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 15:08:40 crc kubenswrapper[4860]: E1014 15:08:40.072016 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift podName:e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a nodeName:}" failed. No retries permitted until 2025-10-14 15:08:44.071996046 +0000 UTC m=+1185.658779505 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift") pod "swift-storage-0" (UID: "e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a") : configmap "swift-ring-files" not found Oct 14 15:08:40 crc kubenswrapper[4860]: I1014 15:08:40.112221 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="90824b73-8623-495c-8bed-fdc67bff987a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Oct 14 15:08:40 crc kubenswrapper[4860]: I1014 15:08:40.580159 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d1afb1fa-9423-4ef6-a771-76c666ca1038" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.534298 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ac3dbbff-ef4c-461d-b2a0-58284b598cb4","Type":"ContainerStarted","Data":"77a73d63eb5b98b546cfebffaf48117273b8d36cd573f443173920cd724d8951"} Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.544608 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9ea3e827-d3d5-481d-b8f6-90b20be97f2e","Type":"ContainerStarted","Data":"11305d1f6a598e6dafa2532b2e3cf4bbed55880511b83e509feab3d1878237bd"} Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.567224 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=17.116722985 podStartE2EDuration="1m1.567209698s" podCreationTimestamp="2025-10-14 15:07:41 +0000 UTC" firstStartedPulling="2025-10-14 15:07:57.555817634 +0000 UTC m=+1139.142601083" lastFinishedPulling="2025-10-14 15:08:42.006304347 +0000 UTC m=+1183.593087796" observedRunningTime="2025-10-14 15:08:42.565593069 +0000 UTC m=+1184.152376518" watchObservedRunningTime="2025-10-14 15:08:42.567209698 +0000 UTC m=+1184.153993147" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.801320 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-9r9x9"] Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.802262 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9" podUID="dc344801-8b18-476c-a055-b7194de3bc7b" containerName="dnsmasq-dns" containerID="cri-o://8dcac9d16ee612e7bfa8a7959ec02ea1827d77f1edf78ac4b1cf04f84f464110" gracePeriod=10 Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.803058 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.850760 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-vz7jg"] Oct 14 15:08:42 crc kubenswrapper[4860]: E1014 15:08:42.851095 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85cdd63-2581-4545-8740-029dbf61a67a" containerName="mariadb-database-create" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.851112 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85cdd63-2581-4545-8740-029dbf61a67a" containerName="mariadb-database-create" Oct 14 15:08:42 crc kubenswrapper[4860]: E1014 15:08:42.851121 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d628649b-8b7c-44e8-b047-f00611d715d3" 
containerName="mariadb-database-create" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.851126 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d628649b-8b7c-44e8-b047-f00611d715d3" containerName="mariadb-database-create" Oct 14 15:08:42 crc kubenswrapper[4860]: E1014 15:08:42.851142 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48903075-196c-4f29-8246-9e1a3ed97181" containerName="mariadb-database-create" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.851149 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="48903075-196c-4f29-8246-9e1a3ed97181" containerName="mariadb-database-create" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.851302 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85cdd63-2581-4545-8740-029dbf61a67a" containerName="mariadb-database-create" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.851315 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="48903075-196c-4f29-8246-9e1a3ed97181" containerName="mariadb-database-create" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.851334 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d628649b-8b7c-44e8-b047-f00611d715d3" containerName="mariadb-database-create" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.852110 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.860379 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.918224 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-config\") pod \"dnsmasq-dns-6c89d5d749-vz7jg\" (UID: \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\") " pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.918352 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-vz7jg\" (UID: \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\") " pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.918388 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-vz7jg\" (UID: \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\") " pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.918414 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdlqj\" (UniqueName: \"kubernetes.io/projected/32953d12-ab2e-48e0-8aeb-866e464d6ec4-kube-api-access-wdlqj\") pod \"dnsmasq-dns-6c89d5d749-vz7jg\" (UID: \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\") " pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.964893 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-vz7jg"] Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.983958 4860 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovn-controller-metrics-s4vnv"] Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.985226 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:42 crc kubenswrapper[4860]: I1014 15:08:42.991314 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.020017 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-config\") pod \"dnsmasq-dns-6c89d5d749-vz7jg\" (UID: \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\") " pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.020127 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-vz7jg\" (UID: \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\") " pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.020152 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-vz7jg\" (UID: \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\") " pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.020172 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdlqj\" (UniqueName: \"kubernetes.io/projected/32953d12-ab2e-48e0-8aeb-866e464d6ec4-kube-api-access-wdlqj\") pod \"dnsmasq-dns-6c89d5d749-vz7jg\" (UID: \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\") " pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.021356 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-config\") pod \"dnsmasq-dns-6c89d5d749-vz7jg\" (UID: \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\") " pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.023374 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-vz7jg\" (UID: \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\") " pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.024392 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-vz7jg\" (UID: \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\") " pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.047477 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-s4vnv"] Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.048837 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdlqj\" (UniqueName: \"kubernetes.io/projected/32953d12-ab2e-48e0-8aeb-866e464d6ec4-kube-api-access-wdlqj\") pod \"dnsmasq-dns-6c89d5d749-vz7jg\" (UID: 
\"32953d12-ab2e-48e0-8aeb-866e464d6ec4\") " pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.121497 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-ovn-rundir\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.121740 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-config\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.122328 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-ovs-rundir\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.122418 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9frnx\" (UniqueName: \"kubernetes.io/projected/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-kube-api-access-9frnx\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.122450 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-combined-ca-bundle\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.122500 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.173863 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.223878 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-ovs-rundir\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.223927 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9frnx\" (UniqueName: \"kubernetes.io/projected/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-kube-api-access-9frnx\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.223948 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-combined-ca-bundle\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.223972 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.224007 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-ovn-rundir\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.224044 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-config\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.224767 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-config\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.224983 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-ovs-rundir\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.225279 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-ovn-rundir\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc 
kubenswrapper[4860]: I1014 15:08:43.231345 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-combined-ca-bundle\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.231564 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.248281 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9frnx\" (UniqueName: \"kubernetes.io/projected/cb8d65af-6ce5-4a61-ad15-c32aeb71c190-kube-api-access-9frnx\") pod \"ovn-controller-metrics-s4vnv\" (UID: \"cb8d65af-6ce5-4a61-ad15-c32aeb71c190\") " pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.304220 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-s4vnv" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.348017 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.427761 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l8dg\" (UniqueName: \"kubernetes.io/projected/dc344801-8b18-476c-a055-b7194de3bc7b-kube-api-access-5l8dg\") pod \"dc344801-8b18-476c-a055-b7194de3bc7b\" (UID: \"dc344801-8b18-476c-a055-b7194de3bc7b\") " Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.428175 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc344801-8b18-476c-a055-b7194de3bc7b-config\") pod \"dc344801-8b18-476c-a055-b7194de3bc7b\" (UID: \"dc344801-8b18-476c-a055-b7194de3bc7b\") " Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.428249 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc344801-8b18-476c-a055-b7194de3bc7b-dns-svc\") pod \"dc344801-8b18-476c-a055-b7194de3bc7b\" (UID: \"dc344801-8b18-476c-a055-b7194de3bc7b\") " Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.433574 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc344801-8b18-476c-a055-b7194de3bc7b-kube-api-access-5l8dg" (OuterVolumeSpecName: "kube-api-access-5l8dg") pod "dc344801-8b18-476c-a055-b7194de3bc7b" (UID: "dc344801-8b18-476c-a055-b7194de3bc7b"). InnerVolumeSpecName "kube-api-access-5l8dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.494291 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc344801-8b18-476c-a055-b7194de3bc7b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc344801-8b18-476c-a055-b7194de3bc7b" (UID: "dc344801-8b18-476c-a055-b7194de3bc7b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.497288 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc344801-8b18-476c-a055-b7194de3bc7b-config" (OuterVolumeSpecName: "config") pod "dc344801-8b18-476c-a055-b7194de3bc7b" (UID: "dc344801-8b18-476c-a055-b7194de3bc7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.498668 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-vz7jg"] Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.531093 4860 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc344801-8b18-476c-a055-b7194de3bc7b-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.531121 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l8dg\" (UniqueName: \"kubernetes.io/projected/dc344801-8b18-476c-a055-b7194de3bc7b-kube-api-access-5l8dg\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.531131 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc344801-8b18-476c-a055-b7194de3bc7b-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.562262 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-dedc-account-create-r4p7j"] Oct 14 15:08:43 crc kubenswrapper[4860]: E1014 15:08:43.562584 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc344801-8b18-476c-a055-b7194de3bc7b" containerName="init" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.562596 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc344801-8b18-476c-a055-b7194de3bc7b" containerName="init" Oct 14 15:08:43 crc kubenswrapper[4860]: E1014 15:08:43.562626 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc344801-8b18-476c-a055-b7194de3bc7b" containerName="dnsmasq-dns" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.562632 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc344801-8b18-476c-a055-b7194de3bc7b" containerName="dnsmasq-dns" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.562794 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc344801-8b18-476c-a055-b7194de3bc7b" containerName="dnsmasq-dns" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.563312 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-dedc-account-create-r4p7j" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.563669 4860 generic.go:334] "Generic (PLEG): container finished" podID="dc344801-8b18-476c-a055-b7194de3bc7b" containerID="8dcac9d16ee612e7bfa8a7959ec02ea1827d77f1edf78ac4b1cf04f84f464110" exitCode=0 Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.563757 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9" event={"ID":"dc344801-8b18-476c-a055-b7194de3bc7b","Type":"ContainerDied","Data":"8dcac9d16ee612e7bfa8a7959ec02ea1827d77f1edf78ac4b1cf04f84f464110"} Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.563782 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9" event={"ID":"dc344801-8b18-476c-a055-b7194de3bc7b","Type":"ContainerDied","Data":"75f886300c32b51ba2c13ab55f15d3566dc3f38aa7e6930e209246d9af6fae5e"} Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.563801 4860 scope.go:117] "RemoveContainer" containerID="8dcac9d16ee612e7bfa8a7959ec02ea1827d77f1edf78ac4b1cf04f84f464110" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.564331 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-9r9x9" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.568272 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.573199 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-9c68c"] Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.574577 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.578227 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.586666 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dedc-account-create-r4p7j"] Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.588752 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-b2bd2" event={"ID":"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd","Type":"ContainerStarted","Data":"d889ae149d75437610bee3220ea50edcb778a377ef7deb794b069109ba64d533"} Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.592710 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9c68c"] Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.628411 4860 scope.go:117] "RemoveContainer" containerID="625ab580cd3a97e0fae897632bd6968f1ae5bddbcc3f33e9e2e7e84a11b51584" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.633983 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-config\") pod \"dnsmasq-dns-698758b865-9c68c\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.634020 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnfr9\" (UniqueName: \"kubernetes.io/projected/153ddd28-cece-4e22-956e-421b65491e15-kube-api-access-nnfr9\") pod \"dnsmasq-dns-698758b865-9c68c\" (UID: 
\"153ddd28-cece-4e22-956e-421b65491e15\") " pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.634061 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-9c68c\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.634083 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-dns-svc\") pod \"dnsmasq-dns-698758b865-9c68c\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.634137 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-9c68c\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.634245 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t6p7\" (UniqueName: \"kubernetes.io/projected/b586fecc-3089-441e-8efa-8f84641f472b-kube-api-access-2t6p7\") pod \"placement-dedc-account-create-r4p7j\" (UID: \"b586fecc-3089-441e-8efa-8f84641f472b\") " pod="openstack/placement-dedc-account-create-r4p7j" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.654240 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-b2bd2" podStartSLOduration=2.942826533 podStartE2EDuration="7.654214002s" podCreationTimestamp="2025-10-14 15:08:36 +0000 UTC" firstStartedPulling="2025-10-14 15:08:37.295900861 +0000 UTC m=+1178.882684310" lastFinishedPulling="2025-10-14 15:08:42.00728832 +0000 UTC m=+1183.594071779" observedRunningTime="2025-10-14 15:08:43.643848491 +0000 UTC m=+1185.230631940" watchObservedRunningTime="2025-10-14 15:08:43.654214002 +0000 UTC m=+1185.240997461" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.657207 4860 scope.go:117] "RemoveContainer" containerID="8dcac9d16ee612e7bfa8a7959ec02ea1827d77f1edf78ac4b1cf04f84f464110" Oct 14 15:08:43 crc kubenswrapper[4860]: E1014 15:08:43.658437 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dcac9d16ee612e7bfa8a7959ec02ea1827d77f1edf78ac4b1cf04f84f464110\": container with ID starting with 8dcac9d16ee612e7bfa8a7959ec02ea1827d77f1edf78ac4b1cf04f84f464110 not found: ID does not exist" containerID="8dcac9d16ee612e7bfa8a7959ec02ea1827d77f1edf78ac4b1cf04f84f464110" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.658462 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dcac9d16ee612e7bfa8a7959ec02ea1827d77f1edf78ac4b1cf04f84f464110"} err="failed to get container status \"8dcac9d16ee612e7bfa8a7959ec02ea1827d77f1edf78ac4b1cf04f84f464110\": rpc error: code = NotFound desc = could not find container \"8dcac9d16ee612e7bfa8a7959ec02ea1827d77f1edf78ac4b1cf04f84f464110\": container with ID starting with 
8dcac9d16ee612e7bfa8a7959ec02ea1827d77f1edf78ac4b1cf04f84f464110 not found: ID does not exist" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.658488 4860 scope.go:117] "RemoveContainer" containerID="625ab580cd3a97e0fae897632bd6968f1ae5bddbcc3f33e9e2e7e84a11b51584" Oct 14 15:08:43 crc kubenswrapper[4860]: E1014 15:08:43.658656 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"625ab580cd3a97e0fae897632bd6968f1ae5bddbcc3f33e9e2e7e84a11b51584\": container with ID starting with 625ab580cd3a97e0fae897632bd6968f1ae5bddbcc3f33e9e2e7e84a11b51584 not found: ID does not exist" containerID="625ab580cd3a97e0fae897632bd6968f1ae5bddbcc3f33e9e2e7e84a11b51584" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.658672 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"625ab580cd3a97e0fae897632bd6968f1ae5bddbcc3f33e9e2e7e84a11b51584"} err="failed to get container status \"625ab580cd3a97e0fae897632bd6968f1ae5bddbcc3f33e9e2e7e84a11b51584\": rpc error: code = NotFound desc = could not find container \"625ab580cd3a97e0fae897632bd6968f1ae5bddbcc3f33e9e2e7e84a11b51584\": container with ID starting with 625ab580cd3a97e0fae897632bd6968f1ae5bddbcc3f33e9e2e7e84a11b51584 not found: ID does not exist" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.738608 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t6p7\" (UniqueName: \"kubernetes.io/projected/b586fecc-3089-441e-8efa-8f84641f472b-kube-api-access-2t6p7\") pod \"placement-dedc-account-create-r4p7j\" (UID: \"b586fecc-3089-441e-8efa-8f84641f472b\") " pod="openstack/placement-dedc-account-create-r4p7j" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.738687 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-config\") pod \"dnsmasq-dns-698758b865-9c68c\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.738706 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnfr9\" (UniqueName: \"kubernetes.io/projected/153ddd28-cece-4e22-956e-421b65491e15-kube-api-access-nnfr9\") pod \"dnsmasq-dns-698758b865-9c68c\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.738730 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-9c68c\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.738754 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-dns-svc\") pod \"dnsmasq-dns-698758b865-9c68c\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.738775 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=21.460978911 podStartE2EDuration="1m6.738754056s" 
podCreationTimestamp="2025-10-14 15:07:37 +0000 UTC" firstStartedPulling="2025-10-14 15:07:56.728863199 +0000 UTC m=+1138.315646638" lastFinishedPulling="2025-10-14 15:08:42.006638334 +0000 UTC m=+1183.593421783" observedRunningTime="2025-10-14 15:08:43.674496812 +0000 UTC m=+1185.261280261" watchObservedRunningTime="2025-10-14 15:08:43.738754056 +0000 UTC m=+1185.325537505" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.738837 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-9c68c\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.745210 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ed1d-account-create-hgzrl"] Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.746970 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ed1d-account-create-hgzrl" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.747698 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-config\") pod \"dnsmasq-dns-698758b865-9c68c\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.749745 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-9c68c\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.750639 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-dns-svc\") pod \"dnsmasq-dns-698758b865-9c68c\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.754237 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.761214 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-9c68c\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.775498 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t6p7\" (UniqueName: \"kubernetes.io/projected/b586fecc-3089-441e-8efa-8f84641f472b-kube-api-access-2t6p7\") pod \"placement-dedc-account-create-r4p7j\" (UID: \"b586fecc-3089-441e-8efa-8f84641f472b\") " pod="openstack/placement-dedc-account-create-r4p7j" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.776273 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnfr9\" (UniqueName: \"kubernetes.io/projected/153ddd28-cece-4e22-956e-421b65491e15-kube-api-access-nnfr9\") pod \"dnsmasq-dns-698758b865-9c68c\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " 
pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.785889 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-9r9x9"] Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.800316 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-9r9x9"] Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.803932 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sc6wm" podUID="8fbd86ca-1d38-4b27-bd36-62198c367b3d" containerName="ovn-controller" probeResult="failure" output=< Oct 14 15:08:43 crc kubenswrapper[4860]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 14 15:08:43 crc kubenswrapper[4860]: > Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.814227 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-vz7jg"] Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.820618 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ed1d-account-create-hgzrl"] Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.840083 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t64kf\" (UniqueName: \"kubernetes.io/projected/1b69c0c4-56e5-4100-a195-9d29ebee6719-kube-api-access-t64kf\") pod \"glance-ed1d-account-create-hgzrl\" (UID: \"1b69c0c4-56e5-4100-a195-9d29ebee6719\") " pod="openstack/glance-ed1d-account-create-hgzrl" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.908279 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dedc-account-create-r4p7j" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.921911 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.953000 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t64kf\" (UniqueName: \"kubernetes.io/projected/1b69c0c4-56e5-4100-a195-9d29ebee6719-kube-api-access-t64kf\") pod \"glance-ed1d-account-create-hgzrl\" (UID: \"1b69c0c4-56e5-4100-a195-9d29ebee6719\") " pod="openstack/glance-ed1d-account-create-hgzrl" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.965068 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.966329 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.968257 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.968368 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9k4sz" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.968497 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 14 15:08:43 crc kubenswrapper[4860]: I1014 15:08:43.968256 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.002404 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t64kf\" (UniqueName: \"kubernetes.io/projected/1b69c0c4-56e5-4100-a195-9d29ebee6719-kube-api-access-t64kf\") pod \"glance-ed1d-account-create-hgzrl\" (UID: \"1b69c0c4-56e5-4100-a195-9d29ebee6719\") " pod="openstack/glance-ed1d-account-create-hgzrl" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.005529 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.054256 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be28925-a379-4ecf-8021-5a16dbd9b666-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.054557 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9be28925-a379-4ecf-8021-5a16dbd9b666-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.054666 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9be28925-a379-4ecf-8021-5a16dbd9b666-scripts\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.054741 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be28925-a379-4ecf-8021-5a16dbd9b666-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.054829 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjbdb\" (UniqueName: \"kubernetes.io/projected/9be28925-a379-4ecf-8021-5a16dbd9b666-kube-api-access-pjbdb\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.054907 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9be28925-a379-4ecf-8021-5a16dbd9b666-config\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 
crc kubenswrapper[4860]: I1014 15:08:44.054993 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be28925-a379-4ecf-8021-5a16dbd9b666-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.076294 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ed1d-account-create-hgzrl" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.100456 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-s4vnv"] Oct 14 15:08:44 crc kubenswrapper[4860]: W1014 15:08:44.110940 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb8d65af_6ce5_4a61_ad15_c32aeb71c190.slice/crio-5a10bc2397b803d1cbed766660be5738f32e12fd22217944fa27da7c1ff120ba WatchSource:0}: Error finding container 5a10bc2397b803d1cbed766660be5738f32e12fd22217944fa27da7c1ff120ba: Status 404 returned error can't find the container with id 5a10bc2397b803d1cbed766660be5738f32e12fd22217944fa27da7c1ff120ba Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.162662 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9be28925-a379-4ecf-8021-5a16dbd9b666-scripts\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.162953 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be28925-a379-4ecf-8021-5a16dbd9b666-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.163909 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9be28925-a379-4ecf-8021-5a16dbd9b666-scripts\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.164294 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjbdb\" (UniqueName: \"kubernetes.io/projected/9be28925-a379-4ecf-8021-5a16dbd9b666-kube-api-access-pjbdb\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.165925 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9be28925-a379-4ecf-8021-5a16dbd9b666-config\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.166150 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be28925-a379-4ecf-8021-5a16dbd9b666-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.166385 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.166467 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be28925-a379-4ecf-8021-5a16dbd9b666-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.166511 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9be28925-a379-4ecf-8021-5a16dbd9b666-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.166789 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9be28925-a379-4ecf-8021-5a16dbd9b666-config\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.167946 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9be28925-a379-4ecf-8021-5a16dbd9b666-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: E1014 15:08:44.171131 4860 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 15:08:44 crc kubenswrapper[4860]: E1014 15:08:44.171161 4860 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 15:08:44 crc kubenswrapper[4860]: E1014 15:08:44.171224 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift podName:e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a nodeName:}" failed. No retries permitted until 2025-10-14 15:08:52.171208823 +0000 UTC m=+1193.757992272 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift") pod "swift-storage-0" (UID: "e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a") : configmap "swift-ring-files" not found Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.177868 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be28925-a379-4ecf-8021-5a16dbd9b666-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.178567 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be28925-a379-4ecf-8021-5a16dbd9b666-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.183011 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be28925-a379-4ecf-8021-5a16dbd9b666-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.219106 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjbdb\" (UniqueName: \"kubernetes.io/projected/9be28925-a379-4ecf-8021-5a16dbd9b666-kube-api-access-pjbdb\") pod \"ovn-northd-0\" (UID: \"9be28925-a379-4ecf-8021-5a16dbd9b666\") " pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.337906 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.594746 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-s4vnv" event={"ID":"cb8d65af-6ce5-4a61-ad15-c32aeb71c190","Type":"ContainerStarted","Data":"5a10bc2397b803d1cbed766660be5738f32e12fd22217944fa27da7c1ff120ba"} Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.596508 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" event={"ID":"32953d12-ab2e-48e0-8aeb-866e464d6ec4","Type":"ContainerStarted","Data":"0b889f4362b24a97c78c4cb05ff49267bfd5299bb12cab6fad4dcc1b5b8e3b07"} Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.596539 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" event={"ID":"32953d12-ab2e-48e0-8aeb-866e464d6ec4","Type":"ContainerStarted","Data":"5dafac5238e6cce290c41c14cc054b4e161b1b699bc06c5360a3fdec7ef54958"} Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.736409 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dedc-account-create-r4p7j"] Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.778465 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9c68c"] Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.794124 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ed1d-account-create-hgzrl"] Oct 14 15:08:44 crc kubenswrapper[4860]: W1014 15:08:44.797155 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod153ddd28_cece_4e22_956e_421b65491e15.slice/crio-1dec96f7d8d263a6c7d55f45f55c2b4a2486c82076ea7e0b3e8ebadd2e2e947f WatchSource:0}: Error finding container 1dec96f7d8d263a6c7d55f45f55c2b4a2486c82076ea7e0b3e8ebadd2e2e947f: Status 404 returned error can't find the container with id 1dec96f7d8d263a6c7d55f45f55c2b4a2486c82076ea7e0b3e8ebadd2e2e947f Oct 14 15:08:44 crc kubenswrapper[4860]: W1014 15:08:44.805121 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb586fecc_3089_441e_8efa_8f84641f472b.slice/crio-d131618bb9d4d6deea5e3f52bf0ae97c8ab0fa6002a7353c347b6a5561f1345f WatchSource:0}: Error finding container d131618bb9d4d6deea5e3f52bf0ae97c8ab0fa6002a7353c347b6a5561f1345f: Status 404 returned error can't find the container with id d131618bb9d4d6deea5e3f52bf0ae97c8ab0fa6002a7353c347b6a5561f1345f Oct 14 15:08:44 crc kubenswrapper[4860]: I1014 15:08:44.977270 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 14 15:08:45 crc kubenswrapper[4860]: I1014 15:08:45.072720 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc344801-8b18-476c-a055-b7194de3bc7b" path="/var/lib/kubelet/pods/dc344801-8b18-476c-a055-b7194de3bc7b/volumes" Oct 14 15:08:45 crc kubenswrapper[4860]: I1014 15:08:45.605843 4860 generic.go:334] "Generic (PLEG): container finished" podID="32953d12-ab2e-48e0-8aeb-866e464d6ec4" containerID="0b889f4362b24a97c78c4cb05ff49267bfd5299bb12cab6fad4dcc1b5b8e3b07" exitCode=0 Oct 14 15:08:45 crc kubenswrapper[4860]: I1014 15:08:45.606142 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" event={"ID":"32953d12-ab2e-48e0-8aeb-866e464d6ec4","Type":"ContainerDied","Data":"0b889f4362b24a97c78c4cb05ff49267bfd5299bb12cab6fad4dcc1b5b8e3b07"} Oct 
14 15:08:45 crc kubenswrapper[4860]: I1014 15:08:45.608359 4860 generic.go:334] "Generic (PLEG): container finished" podID="1b69c0c4-56e5-4100-a195-9d29ebee6719" containerID="efc97acb9d2b686089130b6db463734768abc1f07bea60efc08e0a4881dce350" exitCode=0 Oct 14 15:08:45 crc kubenswrapper[4860]: I1014 15:08:45.608417 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ed1d-account-create-hgzrl" event={"ID":"1b69c0c4-56e5-4100-a195-9d29ebee6719","Type":"ContainerDied","Data":"efc97acb9d2b686089130b6db463734768abc1f07bea60efc08e0a4881dce350"} Oct 14 15:08:45 crc kubenswrapper[4860]: I1014 15:08:45.608441 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ed1d-account-create-hgzrl" event={"ID":"1b69c0c4-56e5-4100-a195-9d29ebee6719","Type":"ContainerStarted","Data":"d71463ed502a48775190e0d2d42d9bcd8438b0e1c318eaa5c0fa49ef42e17863"} Oct 14 15:08:45 crc kubenswrapper[4860]: I1014 15:08:45.609957 4860 generic.go:334] "Generic (PLEG): container finished" podID="b586fecc-3089-441e-8efa-8f84641f472b" containerID="2f8fc8fc070e7110d5d86fb7183d1aeeea38eff08813acbb6dfdce163df98caf" exitCode=0 Oct 14 15:08:45 crc kubenswrapper[4860]: I1014 15:08:45.610017 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dedc-account-create-r4p7j" event={"ID":"b586fecc-3089-441e-8efa-8f84641f472b","Type":"ContainerDied","Data":"2f8fc8fc070e7110d5d86fb7183d1aeeea38eff08813acbb6dfdce163df98caf"} Oct 14 15:08:45 crc kubenswrapper[4860]: I1014 15:08:45.610059 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dedc-account-create-r4p7j" event={"ID":"b586fecc-3089-441e-8efa-8f84641f472b","Type":"ContainerStarted","Data":"d131618bb9d4d6deea5e3f52bf0ae97c8ab0fa6002a7353c347b6a5561f1345f"} Oct 14 15:08:45 crc kubenswrapper[4860]: I1014 15:08:45.617521 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-s4vnv" event={"ID":"cb8d65af-6ce5-4a61-ad15-c32aeb71c190","Type":"ContainerStarted","Data":"dbc00fdea787851848695781e2c6afda50cf149f8ba91a95e90adab0c2f88f94"} Oct 14 15:08:45 crc kubenswrapper[4860]: I1014 15:08:45.619890 4860 generic.go:334] "Generic (PLEG): container finished" podID="153ddd28-cece-4e22-956e-421b65491e15" containerID="f6b7078ecd48d961d854c2df6abcd6a7e258866f5315174634be1b689338bf81" exitCode=0 Oct 14 15:08:45 crc kubenswrapper[4860]: I1014 15:08:45.619942 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9c68c" event={"ID":"153ddd28-cece-4e22-956e-421b65491e15","Type":"ContainerDied","Data":"f6b7078ecd48d961d854c2df6abcd6a7e258866f5315174634be1b689338bf81"} Oct 14 15:08:45 crc kubenswrapper[4860]: I1014 15:08:45.619971 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9c68c" event={"ID":"153ddd28-cece-4e22-956e-421b65491e15","Type":"ContainerStarted","Data":"1dec96f7d8d263a6c7d55f45f55c2b4a2486c82076ea7e0b3e8ebadd2e2e947f"} Oct 14 15:08:45 crc kubenswrapper[4860]: I1014 15:08:45.621356 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9be28925-a379-4ecf-8021-5a16dbd9b666","Type":"ContainerStarted","Data":"00f9ac3199475207321ee4cb2dde03c9eada2f8b533b0ca6ac0f7532206f5865"} Oct 14 15:08:45 crc kubenswrapper[4860]: I1014 15:08:45.985584 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.011284 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-config\") pod \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\" (UID: \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\") " Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.011361 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdlqj\" (UniqueName: \"kubernetes.io/projected/32953d12-ab2e-48e0-8aeb-866e464d6ec4-kube-api-access-wdlqj\") pod \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\" (UID: \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\") " Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.011392 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-ovsdbserver-sb\") pod \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\" (UID: \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\") " Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.011444 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-dns-svc\") pod \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\" (UID: \"32953d12-ab2e-48e0-8aeb-866e464d6ec4\") " Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.011828 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-s4vnv" podStartSLOduration=4.011809217 podStartE2EDuration="4.011809217s" podCreationTimestamp="2025-10-14 15:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:08:45.729527391 +0000 UTC m=+1187.316310840" watchObservedRunningTime="2025-10-14 15:08:46.011809217 +0000 UTC m=+1187.598592666" Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.024865 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32953d12-ab2e-48e0-8aeb-866e464d6ec4-kube-api-access-wdlqj" (OuterVolumeSpecName: "kube-api-access-wdlqj") pod "32953d12-ab2e-48e0-8aeb-866e464d6ec4" (UID: "32953d12-ab2e-48e0-8aeb-866e464d6ec4"). InnerVolumeSpecName "kube-api-access-wdlqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.036736 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "32953d12-ab2e-48e0-8aeb-866e464d6ec4" (UID: "32953d12-ab2e-48e0-8aeb-866e464d6ec4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.037589 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32953d12-ab2e-48e0-8aeb-866e464d6ec4" (UID: "32953d12-ab2e-48e0-8aeb-866e464d6ec4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.041042 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-config" (OuterVolumeSpecName: "config") pod "32953d12-ab2e-48e0-8aeb-866e464d6ec4" (UID: "32953d12-ab2e-48e0-8aeb-866e464d6ec4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.114560 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdlqj\" (UniqueName: \"kubernetes.io/projected/32953d12-ab2e-48e0-8aeb-866e464d6ec4-kube-api-access-wdlqj\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.115715 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.115743 4860 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.115755 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32953d12-ab2e-48e0-8aeb-866e464d6ec4-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.631926 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9c68c" event={"ID":"153ddd28-cece-4e22-956e-421b65491e15","Type":"ContainerStarted","Data":"1031868bea866a6c4c6c7e94d889d9ef722fef5da8df51ef1f86216bb5c64fec"} Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.632084 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.635909 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" event={"ID":"32953d12-ab2e-48e0-8aeb-866e464d6ec4","Type":"ContainerDied","Data":"5dafac5238e6cce290c41c14cc054b4e161b1b699bc06c5360a3fdec7ef54958"} Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.635973 4860 scope.go:117] "RemoveContainer" containerID="0b889f4362b24a97c78c4cb05ff49267bfd5299bb12cab6fad4dcc1b5b8e3b07" Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.636568 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-vz7jg" Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.655444 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-9c68c" podStartSLOduration=3.655427379 podStartE2EDuration="3.655427379s" podCreationTimestamp="2025-10-14 15:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:08:46.653585745 +0000 UTC m=+1188.240369204" watchObservedRunningTime="2025-10-14 15:08:46.655427379 +0000 UTC m=+1188.242210838" Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.703167 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-vz7jg"] Oct 14 15:08:46 crc kubenswrapper[4860]: I1014 15:08:46.708142 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-vz7jg"] Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.071928 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32953d12-ab2e-48e0-8aeb-866e464d6ec4" path="/var/lib/kubelet/pods/32953d12-ab2e-48e0-8aeb-866e464d6ec4/volumes" Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.106608 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dedc-account-create-r4p7j" Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.124564 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ed1d-account-create-hgzrl" Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.131607 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t6p7\" (UniqueName: \"kubernetes.io/projected/b586fecc-3089-441e-8efa-8f84641f472b-kube-api-access-2t6p7\") pod \"b586fecc-3089-441e-8efa-8f84641f472b\" (UID: \"b586fecc-3089-441e-8efa-8f84641f472b\") " Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.135643 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b586fecc-3089-441e-8efa-8f84641f472b-kube-api-access-2t6p7" (OuterVolumeSpecName: "kube-api-access-2t6p7") pod "b586fecc-3089-441e-8efa-8f84641f472b" (UID: "b586fecc-3089-441e-8efa-8f84641f472b"). InnerVolumeSpecName "kube-api-access-2t6p7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.233664 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t64kf\" (UniqueName: \"kubernetes.io/projected/1b69c0c4-56e5-4100-a195-9d29ebee6719-kube-api-access-t64kf\") pod \"1b69c0c4-56e5-4100-a195-9d29ebee6719\" (UID: \"1b69c0c4-56e5-4100-a195-9d29ebee6719\") " Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.234177 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t6p7\" (UniqueName: \"kubernetes.io/projected/b586fecc-3089-441e-8efa-8f84641f472b-kube-api-access-2t6p7\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.237937 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b69c0c4-56e5-4100-a195-9d29ebee6719-kube-api-access-t64kf" (OuterVolumeSpecName: "kube-api-access-t64kf") pod "1b69c0c4-56e5-4100-a195-9d29ebee6719" (UID: "1b69c0c4-56e5-4100-a195-9d29ebee6719"). InnerVolumeSpecName "kube-api-access-t64kf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.335750 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t64kf\" (UniqueName: \"kubernetes.io/projected/1b69c0c4-56e5-4100-a195-9d29ebee6719-kube-api-access-t64kf\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.642830 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dedc-account-create-r4p7j" event={"ID":"b586fecc-3089-441e-8efa-8f84641f472b","Type":"ContainerDied","Data":"d131618bb9d4d6deea5e3f52bf0ae97c8ab0fa6002a7353c347b6a5561f1345f"} Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.642905 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d131618bb9d4d6deea5e3f52bf0ae97c8ab0fa6002a7353c347b6a5561f1345f" Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.643125 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dedc-account-create-r4p7j" Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.644611 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9be28925-a379-4ecf-8021-5a16dbd9b666","Type":"ContainerStarted","Data":"751edfdf07ecb3e83e9c5f8dac8b17074b72349ee18aef12138f790dd1181709"} Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.644644 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9be28925-a379-4ecf-8021-5a16dbd9b666","Type":"ContainerStarted","Data":"77cf8846eccf6421ad5e5d3a24331b9702c3836f7ac51524bb9bc72a82eb5d4c"} Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.644965 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.646998 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ed1d-account-create-hgzrl" Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.646986 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ed1d-account-create-hgzrl" event={"ID":"1b69c0c4-56e5-4100-a195-9d29ebee6719","Type":"ContainerDied","Data":"d71463ed502a48775190e0d2d42d9bcd8438b0e1c318eaa5c0fa49ef42e17863"} Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.647067 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d71463ed502a48775190e0d2d42d9bcd8438b0e1c318eaa5c0fa49ef42e17863" Oct 14 15:08:47 crc kubenswrapper[4860]: I1014 15:08:47.666821 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.751149864 podStartE2EDuration="4.666796453s" podCreationTimestamp="2025-10-14 15:08:43 +0000 UTC" firstStartedPulling="2025-10-14 15:08:44.993789432 +0000 UTC m=+1186.580572881" lastFinishedPulling="2025-10-14 15:08:46.909436021 +0000 UTC m=+1188.496219470" observedRunningTime="2025-10-14 15:08:47.659898807 +0000 UTC m=+1189.246682276" watchObservedRunningTime="2025-10-14 15:08:47.666796453 +0000 UTC m=+1189.253579902" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.745785 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sc6wm" podUID="8fbd86ca-1d38-4b27-bd36-62198c367b3d" containerName="ovn-controller" probeResult="failure" output=< Oct 14 15:08:48 crc kubenswrapper[4860]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 14 15:08:48 crc kubenswrapper[4860]: > Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.835343 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-b8tzr"] Oct 14 15:08:48 crc kubenswrapper[4860]: E1014 15:08:48.835648 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b586fecc-3089-441e-8efa-8f84641f472b" containerName="mariadb-account-create" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.835665 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b586fecc-3089-441e-8efa-8f84641f472b" containerName="mariadb-account-create" Oct 14 15:08:48 crc kubenswrapper[4860]: E1014 15:08:48.835686 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b69c0c4-56e5-4100-a195-9d29ebee6719" containerName="mariadb-account-create" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.835692 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b69c0c4-56e5-4100-a195-9d29ebee6719" containerName="mariadb-account-create" Oct 14 15:08:48 crc kubenswrapper[4860]: E1014 15:08:48.835711 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32953d12-ab2e-48e0-8aeb-866e464d6ec4" containerName="init" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.835718 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="32953d12-ab2e-48e0-8aeb-866e464d6ec4" containerName="init" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.835852 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b69c0c4-56e5-4100-a195-9d29ebee6719" containerName="mariadb-account-create" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.835867 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="32953d12-ab2e-48e0-8aeb-866e464d6ec4" containerName="init" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.835874 4860 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b586fecc-3089-441e-8efa-8f84641f472b" containerName="mariadb-account-create" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.836421 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-b8tzr" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.838875 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.839153 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-g6hpd" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.848191 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-b8tzr"] Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.858472 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-db-sync-config-data\") pod \"glance-db-sync-b8tzr\" (UID: \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\") " pod="openstack/glance-db-sync-b8tzr" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.858568 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-config-data\") pod \"glance-db-sync-b8tzr\" (UID: \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\") " pod="openstack/glance-db-sync-b8tzr" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.858622 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-combined-ca-bundle\") pod \"glance-db-sync-b8tzr\" (UID: \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\") " pod="openstack/glance-db-sync-b8tzr" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.858704 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zpl4\" (UniqueName: \"kubernetes.io/projected/06b62797-8a97-4db0-a6ca-e7b2172ddb78-kube-api-access-4zpl4\") pod \"glance-db-sync-b8tzr\" (UID: \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\") " pod="openstack/glance-db-sync-b8tzr" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.926288 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.943270 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-vbhtr" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.960888 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zpl4\" (UniqueName: \"kubernetes.io/projected/06b62797-8a97-4db0-a6ca-e7b2172ddb78-kube-api-access-4zpl4\") pod \"glance-db-sync-b8tzr\" (UID: \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\") " pod="openstack/glance-db-sync-b8tzr" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.960973 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-db-sync-config-data\") pod \"glance-db-sync-b8tzr\" (UID: \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\") " pod="openstack/glance-db-sync-b8tzr" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.961083 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-config-data\") pod \"glance-db-sync-b8tzr\" (UID: \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\") " pod="openstack/glance-db-sync-b8tzr" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.961157 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-combined-ca-bundle\") pod \"glance-db-sync-b8tzr\" (UID: \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\") " pod="openstack/glance-db-sync-b8tzr" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.969005 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-db-sync-config-data\") pod \"glance-db-sync-b8tzr\" (UID: \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\") " pod="openstack/glance-db-sync-b8tzr" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.969234 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-config-data\") pod \"glance-db-sync-b8tzr\" (UID: \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\") " pod="openstack/glance-db-sync-b8tzr" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.975301 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-combined-ca-bundle\") pod \"glance-db-sync-b8tzr\" (UID: \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\") " pod="openstack/glance-db-sync-b8tzr" Oct 14 15:08:48 crc kubenswrapper[4860]: I1014 15:08:48.987874 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zpl4\" (UniqueName: \"kubernetes.io/projected/06b62797-8a97-4db0-a6ca-e7b2172ddb78-kube-api-access-4zpl4\") pod \"glance-db-sync-b8tzr\" (UID: \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\") " pod="openstack/glance-db-sync-b8tzr" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.154014 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-b8tzr" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.184102 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sc6wm-config-qwskg"] Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.185084 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.188592 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.204077 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sc6wm-config-qwskg"] Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.264761 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-run-ovn\") pod \"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.264808 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-scripts\") pod \"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.264846 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-additional-scripts\") pod \"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.264876 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctmj7\" (UniqueName: \"kubernetes.io/projected/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-kube-api-access-ctmj7\") pod \"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.264937 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-log-ovn\") pod \"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.264958 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-run\") pod \"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.366258 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-log-ovn\") pod \"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.367091 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-log-ovn\") pod 
\"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.367140 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-run\") pod \"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.367225 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-run\") pod \"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.367361 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-run-ovn\") pod \"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.367391 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-scripts\") pod \"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.367461 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-additional-scripts\") pod \"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.367461 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-run-ovn\") pod \"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.367523 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctmj7\" (UniqueName: \"kubernetes.io/projected/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-kube-api-access-ctmj7\") pod \"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.368327 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-additional-scripts\") pod \"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.369370 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-scripts\") pod 
\"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.391690 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctmj7\" (UniqueName: \"kubernetes.io/projected/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-kube-api-access-ctmj7\") pod \"ovn-controller-sc6wm-config-qwskg\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.591278 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:49 crc kubenswrapper[4860]: I1014 15:08:49.808342 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-b8tzr"] Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.113206 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.113982 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sc6wm-config-qwskg"] Oct 14 15:08:50 crc kubenswrapper[4860]: W1014 15:08:50.125926 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc41efd5d_3d6c_46cf_86f5_1c09d21e328c.slice/crio-5c282f689fab129e519f4e500920cc418c0bd712aeaa8bc51c349fc8a2e874b9 WatchSource:0}: Error finding container 5c282f689fab129e519f4e500920cc418c0bd712aeaa8bc51c349fc8a2e874b9: Status 404 returned error can't find the container with id 5c282f689fab129e519f4e500920cc418c0bd712aeaa8bc51c349fc8a2e874b9 Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.526768 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-846cm"] Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.528146 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-846cm" Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.540689 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-846cm"] Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.584236 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.683053 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sc6wm-config-qwskg" event={"ID":"c41efd5d-3d6c-46cf-86f5-1c09d21e328c","Type":"ContainerStarted","Data":"d1eaf723ebba156258c6570387c8d3c1cbda0f25874c523ade080f466724e5a7"} Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.683360 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sc6wm-config-qwskg" event={"ID":"c41efd5d-3d6c-46cf-86f5-1c09d21e328c","Type":"ContainerStarted","Data":"5c282f689fab129e519f4e500920cc418c0bd712aeaa8bc51c349fc8a2e874b9"} Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.691828 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b8tzr" event={"ID":"06b62797-8a97-4db0-a6ca-e7b2172ddb78","Type":"ContainerStarted","Data":"509354b4759c86555bd7278af4c261ce3f7647b7a3a4389e941cd4e416cfff54"} Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.710198 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlsxz\" (UniqueName: \"kubernetes.io/projected/4de56759-b727-495a-b9dd-3daa0cd45527-kube-api-access-mlsxz\") pod \"cinder-db-create-846cm\" (UID: \"4de56759-b727-495a-b9dd-3daa0cd45527\") " pod="openstack/cinder-db-create-846cm" Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.716917 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sc6wm-config-qwskg" podStartSLOduration=1.716904443 podStartE2EDuration="1.716904443s" podCreationTimestamp="2025-10-14 15:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:08:50.710935389 +0000 UTC m=+1192.297718848" watchObservedRunningTime="2025-10-14 15:08:50.716904443 +0000 UTC m=+1192.303687892" Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.746194 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-gpw62"] Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.747531 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-gpw62" Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.770166 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-gpw62"] Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.812222 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlsxz\" (UniqueName: \"kubernetes.io/projected/4de56759-b727-495a-b9dd-3daa0cd45527-kube-api-access-mlsxz\") pod \"cinder-db-create-846cm\" (UID: \"4de56759-b727-495a-b9dd-3daa0cd45527\") " pod="openstack/cinder-db-create-846cm" Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.865905 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlsxz\" (UniqueName: \"kubernetes.io/projected/4de56759-b727-495a-b9dd-3daa0cd45527-kube-api-access-mlsxz\") pod \"cinder-db-create-846cm\" (UID: \"4de56759-b727-495a-b9dd-3daa0cd45527\") " pod="openstack/cinder-db-create-846cm" Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.915249 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcfzd\" (UniqueName: \"kubernetes.io/projected/699edce6-a0b8-48e8-b5cb-b27747a6c048-kube-api-access-gcfzd\") pod \"barbican-db-create-gpw62\" (UID: \"699edce6-a0b8-48e8-b5cb-b27747a6c048\") " pod="openstack/barbican-db-create-gpw62" Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.921734 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-d7fsm"] Oct 14 15:08:50 crc kubenswrapper[4860]: I1014 15:08:50.940243 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-d7fsm" Oct 14 15:08:51 crc kubenswrapper[4860]: I1014 15:08:51.015629 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-d7fsm"] Oct 14 15:08:51 crc kubenswrapper[4860]: I1014 15:08:51.018362 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcfzd\" (UniqueName: \"kubernetes.io/projected/699edce6-a0b8-48e8-b5cb-b27747a6c048-kube-api-access-gcfzd\") pod \"barbican-db-create-gpw62\" (UID: \"699edce6-a0b8-48e8-b5cb-b27747a6c048\") " pod="openstack/barbican-db-create-gpw62" Oct 14 15:08:51 crc kubenswrapper[4860]: I1014 15:08:51.088496 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcfzd\" (UniqueName: \"kubernetes.io/projected/699edce6-a0b8-48e8-b5cb-b27747a6c048-kube-api-access-gcfzd\") pod \"barbican-db-create-gpw62\" (UID: \"699edce6-a0b8-48e8-b5cb-b27747a6c048\") " pod="openstack/barbican-db-create-gpw62" Oct 14 15:08:51 crc kubenswrapper[4860]: I1014 15:08:51.123220 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwlgr\" (UniqueName: \"kubernetes.io/projected/532385fd-6404-44f6-93fa-0bfcf9b16662-kube-api-access-zwlgr\") pod \"neutron-db-create-d7fsm\" (UID: \"532385fd-6404-44f6-93fa-0bfcf9b16662\") " pod="openstack/neutron-db-create-d7fsm" Oct 14 15:08:51 crc kubenswrapper[4860]: I1014 15:08:51.144012 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-846cm" Oct 14 15:08:51 crc kubenswrapper[4860]: I1014 15:08:51.224434 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwlgr\" (UniqueName: \"kubernetes.io/projected/532385fd-6404-44f6-93fa-0bfcf9b16662-kube-api-access-zwlgr\") pod \"neutron-db-create-d7fsm\" (UID: \"532385fd-6404-44f6-93fa-0bfcf9b16662\") " pod="openstack/neutron-db-create-d7fsm" Oct 14 15:08:51 crc kubenswrapper[4860]: I1014 15:08:51.244420 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwlgr\" (UniqueName: \"kubernetes.io/projected/532385fd-6404-44f6-93fa-0bfcf9b16662-kube-api-access-zwlgr\") pod \"neutron-db-create-d7fsm\" (UID: \"532385fd-6404-44f6-93fa-0bfcf9b16662\") " pod="openstack/neutron-db-create-d7fsm" Oct 14 15:08:51 crc kubenswrapper[4860]: I1014 15:08:51.346239 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-d7fsm" Oct 14 15:08:51 crc kubenswrapper[4860]: I1014 15:08:51.370856 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gpw62" Oct 14 15:08:51 crc kubenswrapper[4860]: I1014 15:08:51.742678 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-846cm"] Oct 14 15:08:51 crc kubenswrapper[4860]: I1014 15:08:51.982943 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-d7fsm"] Oct 14 15:08:51 crc kubenswrapper[4860]: W1014 15:08:51.999047 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod532385fd_6404_44f6_93fa_0bfcf9b16662.slice/crio-69e21ae8dc72c7c42e0c80555d9059d679d4995fbe368ffa63728e352af2274d WatchSource:0}: Error finding container 69e21ae8dc72c7c42e0c80555d9059d679d4995fbe368ffa63728e352af2274d: Status 404 returned error can't find the container with id 69e21ae8dc72c7c42e0c80555d9059d679d4995fbe368ffa63728e352af2274d Oct 14 15:08:52 crc kubenswrapper[4860]: I1014 15:08:52.072268 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-gpw62"] Oct 14 15:08:52 crc kubenswrapper[4860]: W1014 15:08:52.089275 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod699edce6_a0b8_48e8_b5cb_b27747a6c048.slice/crio-b2684705867b7af3e83a208574e8cb9487fd3c6a77e12ca8d21174214de74873 WatchSource:0}: Error finding container b2684705867b7af3e83a208574e8cb9487fd3c6a77e12ca8d21174214de74873: Status 404 returned error can't find the container with id b2684705867b7af3e83a208574e8cb9487fd3c6a77e12ca8d21174214de74873 Oct 14 15:08:52 crc kubenswrapper[4860]: I1014 15:08:52.249686 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0" Oct 14 15:08:52 crc kubenswrapper[4860]: E1014 15:08:52.249852 4860 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 15:08:52 crc kubenswrapper[4860]: E1014 15:08:52.249869 4860 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 15:08:52 crc kubenswrapper[4860]: E1014 
15:08:52.249918 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift podName:e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a nodeName:}" failed. No retries permitted until 2025-10-14 15:09:08.24990181 +0000 UTC m=+1209.836685259 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift") pod "swift-storage-0" (UID: "e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a") : configmap "swift-ring-files" not found Oct 14 15:08:52 crc kubenswrapper[4860]: I1014 15:08:52.714343 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-d7fsm" event={"ID":"532385fd-6404-44f6-93fa-0bfcf9b16662","Type":"ContainerStarted","Data":"042f7ae01bbb24257b17a0d932c3fa4e2ec3a7f0b0793173f2a32e3eed83a2bc"} Oct 14 15:08:52 crc kubenswrapper[4860]: I1014 15:08:52.714388 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-d7fsm" event={"ID":"532385fd-6404-44f6-93fa-0bfcf9b16662","Type":"ContainerStarted","Data":"69e21ae8dc72c7c42e0c80555d9059d679d4995fbe368ffa63728e352af2274d"} Oct 14 15:08:52 crc kubenswrapper[4860]: I1014 15:08:52.717886 4860 generic.go:334] "Generic (PLEG): container finished" podID="c41efd5d-3d6c-46cf-86f5-1c09d21e328c" containerID="d1eaf723ebba156258c6570387c8d3c1cbda0f25874c523ade080f466724e5a7" exitCode=0 Oct 14 15:08:52 crc kubenswrapper[4860]: I1014 15:08:52.717954 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sc6wm-config-qwskg" event={"ID":"c41efd5d-3d6c-46cf-86f5-1c09d21e328c","Type":"ContainerDied","Data":"d1eaf723ebba156258c6570387c8d3c1cbda0f25874c523ade080f466724e5a7"} Oct 14 15:08:52 crc kubenswrapper[4860]: I1014 15:08:52.724235 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-846cm" event={"ID":"4de56759-b727-495a-b9dd-3daa0cd45527","Type":"ContainerStarted","Data":"d07e1fa94614470d9fd6989a96974096b0feb61cebf557d163110cfc59c308b0"} Oct 14 15:08:52 crc kubenswrapper[4860]: I1014 15:08:52.724278 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-846cm" event={"ID":"4de56759-b727-495a-b9dd-3daa0cd45527","Type":"ContainerStarted","Data":"745e43e8c595261990de386485d106cc5bd12649d2bd93882554ad307a791947"} Oct 14 15:08:52 crc kubenswrapper[4860]: I1014 15:08:52.726988 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gpw62" event={"ID":"699edce6-a0b8-48e8-b5cb-b27747a6c048","Type":"ContainerStarted","Data":"b7723773d1697440c4fb91152ff65041ed188e23139a42462d3f3678a29475e5"} Oct 14 15:08:52 crc kubenswrapper[4860]: I1014 15:08:52.727019 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gpw62" event={"ID":"699edce6-a0b8-48e8-b5cb-b27747a6c048","Type":"ContainerStarted","Data":"b2684705867b7af3e83a208574e8cb9487fd3c6a77e12ca8d21174214de74873"} Oct 14 15:08:52 crc kubenswrapper[4860]: I1014 15:08:52.737419 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-d7fsm" podStartSLOduration=2.737393307 podStartE2EDuration="2.737393307s" podCreationTimestamp="2025-10-14 15:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:08:52.732794446 +0000 UTC m=+1194.319577895" watchObservedRunningTime="2025-10-14 15:08:52.737393307 +0000 
UTC m=+1194.324176756" Oct 14 15:08:52 crc kubenswrapper[4860]: I1014 15:08:52.756349 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-846cm" podStartSLOduration=2.7563341550000002 podStartE2EDuration="2.756334155s" podCreationTimestamp="2025-10-14 15:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:08:52.75282976 +0000 UTC m=+1194.339613209" watchObservedRunningTime="2025-10-14 15:08:52.756334155 +0000 UTC m=+1194.343117604" Oct 14 15:08:52 crc kubenswrapper[4860]: I1014 15:08:52.789426 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-gpw62" podStartSLOduration=2.789409134 podStartE2EDuration="2.789409134s" podCreationTimestamp="2025-10-14 15:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:08:52.784056855 +0000 UTC m=+1194.370840304" watchObservedRunningTime="2025-10-14 15:08:52.789409134 +0000 UTC m=+1194.376192583" Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.075829 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-54f4-account-create-5h8k4"] Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.076993 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-54f4-account-create-5h8k4" Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.079949 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.094893 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54f4-account-create-5h8k4"] Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.187965 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqvrt\" (UniqueName: \"kubernetes.io/projected/2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b-kube-api-access-sqvrt\") pod \"keystone-54f4-account-create-5h8k4\" (UID: \"2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b\") " pod="openstack/keystone-54f4-account-create-5h8k4" Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.289451 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqvrt\" (UniqueName: \"kubernetes.io/projected/2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b-kube-api-access-sqvrt\") pod \"keystone-54f4-account-create-5h8k4\" (UID: \"2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b\") " pod="openstack/keystone-54f4-account-create-5h8k4" Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.315927 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqvrt\" (UniqueName: \"kubernetes.io/projected/2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b-kube-api-access-sqvrt\") pod \"keystone-54f4-account-create-5h8k4\" (UID: \"2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b\") " pod="openstack/keystone-54f4-account-create-5h8k4" Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.392834 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-54f4-account-create-5h8k4" Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.800350 4860 generic.go:334] "Generic (PLEG): container finished" podID="699edce6-a0b8-48e8-b5cb-b27747a6c048" containerID="b7723773d1697440c4fb91152ff65041ed188e23139a42462d3f3678a29475e5" exitCode=0 Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.800598 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gpw62" event={"ID":"699edce6-a0b8-48e8-b5cb-b27747a6c048","Type":"ContainerDied","Data":"b7723773d1697440c4fb91152ff65041ed188e23139a42462d3f3678a29475e5"} Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.814995 4860 generic.go:334] "Generic (PLEG): container finished" podID="532385fd-6404-44f6-93fa-0bfcf9b16662" containerID="042f7ae01bbb24257b17a0d932c3fa4e2ec3a7f0b0793173f2a32e3eed83a2bc" exitCode=0 Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.815078 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-d7fsm" event={"ID":"532385fd-6404-44f6-93fa-0bfcf9b16662","Type":"ContainerDied","Data":"042f7ae01bbb24257b17a0d932c3fa4e2ec3a7f0b0793173f2a32e3eed83a2bc"} Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.820154 4860 generic.go:334] "Generic (PLEG): container finished" podID="4de56759-b727-495a-b9dd-3daa0cd45527" containerID="d07e1fa94614470d9fd6989a96974096b0feb61cebf557d163110cfc59c308b0" exitCode=0 Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.820260 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-846cm" event={"ID":"4de56759-b727-495a-b9dd-3daa0cd45527","Type":"ContainerDied","Data":"d07e1fa94614470d9fd6989a96974096b0feb61cebf557d163110cfc59c308b0"} Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.863450 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-sc6wm" Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.926863 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:08:53 crc kubenswrapper[4860]: I1014 15:08:53.949042 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54f4-account-create-5h8k4"] Oct 14 15:08:53 crc kubenswrapper[4860]: W1014 15:08:53.977083 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e2a8218_fd4f_44d9_b7bc_ae5fead34e2b.slice/crio-45dd2fbc298fc7fed39863af1caaebeeb46871ef245f8e7b4706beabcd824452 WatchSource:0}: Error finding container 45dd2fbc298fc7fed39863af1caaebeeb46871ef245f8e7b4706beabcd824452: Status 404 returned error can't find the container with id 45dd2fbc298fc7fed39863af1caaebeeb46871ef245f8e7b4706beabcd824452 Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.085535 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-95xfl"] Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.085761 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" podUID="9138e3ca-610f-4970-984b-626c6aab739d" containerName="dnsmasq-dns" containerID="cri-o://87a55c9f390fea02e08720e367b4f7555e4daaf5063991cb63c0468b83f6fa21" gracePeriod=10 Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.523637 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.620522 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctmj7\" (UniqueName: \"kubernetes.io/projected/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-kube-api-access-ctmj7\") pod \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.620663 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-scripts\") pod \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.620716 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-run\") pod \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.620797 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-run" (OuterVolumeSpecName: "var-run") pod "c41efd5d-3d6c-46cf-86f5-1c09d21e328c" (UID: "c41efd5d-3d6c-46cf-86f5-1c09d21e328c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.620753 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-log-ovn\") pod \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.620861 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-run-ovn\") pod \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.620907 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c41efd5d-3d6c-46cf-86f5-1c09d21e328c" (UID: "c41efd5d-3d6c-46cf-86f5-1c09d21e328c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.620951 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-additional-scripts\") pod \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\" (UID: \"c41efd5d-3d6c-46cf-86f5-1c09d21e328c\") " Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.621090 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c41efd5d-3d6c-46cf-86f5-1c09d21e328c" (UID: "c41efd5d-3d6c-46cf-86f5-1c09d21e328c"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.621746 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c41efd5d-3d6c-46cf-86f5-1c09d21e328c" (UID: "c41efd5d-3d6c-46cf-86f5-1c09d21e328c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.622492 4860 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.622508 4860 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.622517 4860 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.622527 4860 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-var-run\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.624997 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-scripts" (OuterVolumeSpecName: "scripts") pod "c41efd5d-3d6c-46cf-86f5-1c09d21e328c" (UID: "c41efd5d-3d6c-46cf-86f5-1c09d21e328c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.661330 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-kube-api-access-ctmj7" (OuterVolumeSpecName: "kube-api-access-ctmj7") pod "c41efd5d-3d6c-46cf-86f5-1c09d21e328c" (UID: "c41efd5d-3d6c-46cf-86f5-1c09d21e328c"). InnerVolumeSpecName "kube-api-access-ctmj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.723667 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.723696 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctmj7\" (UniqueName: \"kubernetes.io/projected/c41efd5d-3d6c-46cf-86f5-1c09d21e328c-kube-api-access-ctmj7\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.834570 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.841554 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sc6wm-config-qwskg" event={"ID":"c41efd5d-3d6c-46cf-86f5-1c09d21e328c","Type":"ContainerDied","Data":"5c282f689fab129e519f4e500920cc418c0bd712aeaa8bc51c349fc8a2e874b9"} Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.841597 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c282f689fab129e519f4e500920cc418c0bd712aeaa8bc51c349fc8a2e874b9" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.841567 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sc6wm-config-qwskg" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.855119 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54f4-account-create-5h8k4" event={"ID":"2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b","Type":"ContainerStarted","Data":"ec7c31e794c33e06b23a2dd3b32b566620a41a1d670d4af29bee891190af6477"} Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.855408 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54f4-account-create-5h8k4" event={"ID":"2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b","Type":"ContainerStarted","Data":"45dd2fbc298fc7fed39863af1caaebeeb46871ef245f8e7b4706beabcd824452"} Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.858169 4860 generic.go:334] "Generic (PLEG): container finished" podID="9138e3ca-610f-4970-984b-626c6aab739d" containerID="87a55c9f390fea02e08720e367b4f7555e4daaf5063991cb63c0468b83f6fa21" exitCode=0 Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.858366 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.858712 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" event={"ID":"9138e3ca-610f-4970-984b-626c6aab739d","Type":"ContainerDied","Data":"87a55c9f390fea02e08720e367b4f7555e4daaf5063991cb63c0468b83f6fa21"} Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.858742 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" event={"ID":"9138e3ca-610f-4970-984b-626c6aab739d","Type":"ContainerDied","Data":"319974c90b4e0c8471c05a29e5710a6db7d9cb9c0cb4edb3c413b788f9a15911"} Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.858758 4860 scope.go:117] "RemoveContainer" containerID="87a55c9f390fea02e08720e367b4f7555e4daaf5063991cb63c0468b83f6fa21" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.901057 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-54f4-account-create-5h8k4" podStartSLOduration=1.901041092 podStartE2EDuration="1.901041092s" podCreationTimestamp="2025-10-14 15:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:08:54.892349842 +0000 UTC m=+1196.479133291" watchObservedRunningTime="2025-10-14 15:08:54.901041092 +0000 UTC m=+1196.487824541" Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.926339 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9138e3ca-610f-4970-984b-626c6aab739d-config\") pod \"9138e3ca-610f-4970-984b-626c6aab739d\" (UID: \"9138e3ca-610f-4970-984b-626c6aab739d\") " Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.926681 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpx94\" (UniqueName: \"kubernetes.io/projected/9138e3ca-610f-4970-984b-626c6aab739d-kube-api-access-hpx94\") pod \"9138e3ca-610f-4970-984b-626c6aab739d\" (UID: \"9138e3ca-610f-4970-984b-626c6aab739d\") " Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.926808 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9138e3ca-610f-4970-984b-626c6aab739d-dns-svc\") pod \"9138e3ca-610f-4970-984b-626c6aab739d\" (UID: \"9138e3ca-610f-4970-984b-626c6aab739d\") " Oct 14 15:08:54 crc kubenswrapper[4860]: I1014 15:08:54.938108 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9138e3ca-610f-4970-984b-626c6aab739d-kube-api-access-hpx94" (OuterVolumeSpecName: "kube-api-access-hpx94") pod "9138e3ca-610f-4970-984b-626c6aab739d" (UID: "9138e3ca-610f-4970-984b-626c6aab739d"). InnerVolumeSpecName "kube-api-access-hpx94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.002758 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sc6wm-config-qwskg"] Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.016831 4860 scope.go:117] "RemoveContainer" containerID="235c7761e9aab92e0ccffdde38adcce29a13bae8eb21e1f8c0b549a2278e8973" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.021496 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9138e3ca-610f-4970-984b-626c6aab739d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9138e3ca-610f-4970-984b-626c6aab739d" (UID: "9138e3ca-610f-4970-984b-626c6aab739d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.028492 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpx94\" (UniqueName: \"kubernetes.io/projected/9138e3ca-610f-4970-984b-626c6aab739d-kube-api-access-hpx94\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.028521 4860 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9138e3ca-610f-4970-984b-626c6aab739d-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.034297 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sc6wm-config-qwskg"] Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.037055 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9138e3ca-610f-4970-984b-626c6aab739d-config" (OuterVolumeSpecName: "config") pod "9138e3ca-610f-4970-984b-626c6aab739d" (UID: "9138e3ca-610f-4970-984b-626c6aab739d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.098941 4860 scope.go:117] "RemoveContainer" containerID="87a55c9f390fea02e08720e367b4f7555e4daaf5063991cb63c0468b83f6fa21" Oct 14 15:08:55 crc kubenswrapper[4860]: E1014 15:08:55.106847 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87a55c9f390fea02e08720e367b4f7555e4daaf5063991cb63c0468b83f6fa21\": container with ID starting with 87a55c9f390fea02e08720e367b4f7555e4daaf5063991cb63c0468b83f6fa21 not found: ID does not exist" containerID="87a55c9f390fea02e08720e367b4f7555e4daaf5063991cb63c0468b83f6fa21" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.106904 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87a55c9f390fea02e08720e367b4f7555e4daaf5063991cb63c0468b83f6fa21"} err="failed to get container status \"87a55c9f390fea02e08720e367b4f7555e4daaf5063991cb63c0468b83f6fa21\": rpc error: code = NotFound desc = could not find container \"87a55c9f390fea02e08720e367b4f7555e4daaf5063991cb63c0468b83f6fa21\": container with ID starting with 87a55c9f390fea02e08720e367b4f7555e4daaf5063991cb63c0468b83f6fa21 not found: ID does not exist" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.106946 4860 scope.go:117] "RemoveContainer" containerID="235c7761e9aab92e0ccffdde38adcce29a13bae8eb21e1f8c0b549a2278e8973" Oct 14 15:08:55 crc kubenswrapper[4860]: E1014 15:08:55.107723 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"235c7761e9aab92e0ccffdde38adcce29a13bae8eb21e1f8c0b549a2278e8973\": container with ID starting with 235c7761e9aab92e0ccffdde38adcce29a13bae8eb21e1f8c0b549a2278e8973 not found: ID does not exist" containerID="235c7761e9aab92e0ccffdde38adcce29a13bae8eb21e1f8c0b549a2278e8973" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.107743 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"235c7761e9aab92e0ccffdde38adcce29a13bae8eb21e1f8c0b549a2278e8973"} err="failed to get container status \"235c7761e9aab92e0ccffdde38adcce29a13bae8eb21e1f8c0b549a2278e8973\": rpc error: code = NotFound desc = could not find container \"235c7761e9aab92e0ccffdde38adcce29a13bae8eb21e1f8c0b549a2278e8973\": container with ID starting with 235c7761e9aab92e0ccffdde38adcce29a13bae8eb21e1f8c0b549a2278e8973 not found: ID does not exist" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.108654 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41efd5d-3d6c-46cf-86f5-1c09d21e328c" path="/var/lib/kubelet/pods/c41efd5d-3d6c-46cf-86f5-1c09d21e328c/volumes" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.120731 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sc6wm-config-b8txz"] Oct 14 15:08:55 crc kubenswrapper[4860]: E1014 15:08:55.130559 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9138e3ca-610f-4970-984b-626c6aab739d" containerName="dnsmasq-dns" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.130590 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9138e3ca-610f-4970-984b-626c6aab739d" containerName="dnsmasq-dns" Oct 14 15:08:55 crc kubenswrapper[4860]: E1014 15:08:55.130615 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9138e3ca-610f-4970-984b-626c6aab739d" containerName="init" Oct 14 15:08:55 crc 
kubenswrapper[4860]: I1014 15:08:55.130622 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9138e3ca-610f-4970-984b-626c6aab739d" containerName="init" Oct 14 15:08:55 crc kubenswrapper[4860]: E1014 15:08:55.130653 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41efd5d-3d6c-46cf-86f5-1c09d21e328c" containerName="ovn-config" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.130660 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41efd5d-3d6c-46cf-86f5-1c09d21e328c" containerName="ovn-config" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.130899 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9138e3ca-610f-4970-984b-626c6aab739d" containerName="dnsmasq-dns" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.130960 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41efd5d-3d6c-46cf-86f5-1c09d21e328c" containerName="ovn-config" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.132358 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9138e3ca-610f-4970-984b-626c6aab739d-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.132440 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.143306 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.145853 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sc6wm-config-b8txz"] Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.211223 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-95xfl"] Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.234779 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-95xfl"] Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.234899 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-run-ovn\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.235019 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-run\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.235088 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-log-ovn\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.235114 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: 
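The paired entries above from log.go:32 and pod_container_deletor.go:53 show the kubelet retrying RemoveContainer for IDs the runtime has already garbage-collected; the CRI NotFound is terminal and harmless during this kind of cleanup. A sketch that flags such benign races by joining RemoveContainer requests with NotFound responses on the 64-hex container ID (same assumed "kubelet.log" file as before):

import re

HEX64 = re.compile(r"\b[0-9a-f]{64}\b")

requested, missing = set(), set()
with open("kubelet.log") as f:  # assumed file name
    for line in f:
        ids = HEX64.findall(line)
        if not ids:
            continue
        if '"RemoveContainer" containerID=' in line:
            requested.add(ids[0])
        elif "could not find container" in line:
            missing.add(ids[0])

# IDs whose deletion raced with runtime garbage collection (already gone):
for cid in sorted(requested & missing):
    print("removal raced with runtime GC:", cid[:12])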
\"kubernetes.io/configmap/e39a8552-5798-4ad4-b296-03f62f450319-additional-scripts\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.235161 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p62q\" (UniqueName: \"kubernetes.io/projected/e39a8552-5798-4ad4-b296-03f62f450319-kube-api-access-5p62q\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.235204 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e39a8552-5798-4ad4-b296-03f62f450319-scripts\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.338586 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-log-ovn\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.338636 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e39a8552-5798-4ad4-b296-03f62f450319-additional-scripts\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.338673 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p62q\" (UniqueName: \"kubernetes.io/projected/e39a8552-5798-4ad4-b296-03f62f450319-kube-api-access-5p62q\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.338710 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e39a8552-5798-4ad4-b296-03f62f450319-scripts\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.338759 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-run-ovn\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.338817 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-run\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.339094 4860 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-run\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.339145 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-log-ovn\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.339761 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e39a8552-5798-4ad4-b296-03f62f450319-additional-scripts\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.344374 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e39a8552-5798-4ad4-b296-03f62f450319-scripts\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.344478 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-run-ovn\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.362820 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p62q\" (UniqueName: \"kubernetes.io/projected/e39a8552-5798-4ad4-b296-03f62f450319-kube-api-access-5p62q\") pod \"ovn-controller-sc6wm-config-b8txz\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.466741 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-d7fsm" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.468741 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.562109 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwlgr\" (UniqueName: \"kubernetes.io/projected/532385fd-6404-44f6-93fa-0bfcf9b16662-kube-api-access-zwlgr\") pod \"532385fd-6404-44f6-93fa-0bfcf9b16662\" (UID: \"532385fd-6404-44f6-93fa-0bfcf9b16662\") " Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.565992 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/532385fd-6404-44f6-93fa-0bfcf9b16662-kube-api-access-zwlgr" (OuterVolumeSpecName: "kube-api-access-zwlgr") pod "532385fd-6404-44f6-93fa-0bfcf9b16662" (UID: "532385fd-6404-44f6-93fa-0bfcf9b16662"). InnerVolumeSpecName "kube-api-access-zwlgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.663991 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwlgr\" (UniqueName: \"kubernetes.io/projected/532385fd-6404-44f6-93fa-0bfcf9b16662-kube-api-access-zwlgr\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.668799 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-846cm" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.765205 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlsxz\" (UniqueName: \"kubernetes.io/projected/4de56759-b727-495a-b9dd-3daa0cd45527-kube-api-access-mlsxz\") pod \"4de56759-b727-495a-b9dd-3daa0cd45527\" (UID: \"4de56759-b727-495a-b9dd-3daa0cd45527\") " Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.769544 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de56759-b727-495a-b9dd-3daa0cd45527-kube-api-access-mlsxz" (OuterVolumeSpecName: "kube-api-access-mlsxz") pod "4de56759-b727-495a-b9dd-3daa0cd45527" (UID: "4de56759-b727-495a-b9dd-3daa0cd45527"). InnerVolumeSpecName "kube-api-access-mlsxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.790512 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gpw62" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.866322 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcfzd\" (UniqueName: \"kubernetes.io/projected/699edce6-a0b8-48e8-b5cb-b27747a6c048-kube-api-access-gcfzd\") pod \"699edce6-a0b8-48e8-b5cb-b27747a6c048\" (UID: \"699edce6-a0b8-48e8-b5cb-b27747a6c048\") " Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.866688 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlsxz\" (UniqueName: \"kubernetes.io/projected/4de56759-b727-495a-b9dd-3daa0cd45527-kube-api-access-mlsxz\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.869709 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-846cm" event={"ID":"4de56759-b727-495a-b9dd-3daa0cd45527","Type":"ContainerDied","Data":"745e43e8c595261990de386485d106cc5bd12649d2bd93882554ad307a791947"} Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.869750 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="745e43e8c595261990de386485d106cc5bd12649d2bd93882554ad307a791947" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.869807 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-846cm" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.870066 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/699edce6-a0b8-48e8-b5cb-b27747a6c048-kube-api-access-gcfzd" (OuterVolumeSpecName: "kube-api-access-gcfzd") pod "699edce6-a0b8-48e8-b5cb-b27747a6c048" (UID: "699edce6-a0b8-48e8-b5cb-b27747a6c048"). InnerVolumeSpecName "kube-api-access-gcfzd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.884654 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gpw62" event={"ID":"699edce6-a0b8-48e8-b5cb-b27747a6c048","Type":"ContainerDied","Data":"b2684705867b7af3e83a208574e8cb9487fd3c6a77e12ca8d21174214de74873"} Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.884695 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2684705867b7af3e83a208574e8cb9487fd3c6a77e12ca8d21174214de74873" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.884765 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gpw62" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.900879 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-d7fsm" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.901194 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-d7fsm" event={"ID":"532385fd-6404-44f6-93fa-0bfcf9b16662","Type":"ContainerDied","Data":"69e21ae8dc72c7c42e0c80555d9059d679d4995fbe368ffa63728e352af2274d"} Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.901240 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69e21ae8dc72c7c42e0c80555d9059d679d4995fbe368ffa63728e352af2274d" Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.915770 4860 generic.go:334] "Generic (PLEG): container finished" podID="2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b" containerID="ec7c31e794c33e06b23a2dd3b32b566620a41a1d670d4af29bee891190af6477" exitCode=0 Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.916390 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54f4-account-create-5h8k4" event={"ID":"2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b","Type":"ContainerDied","Data":"ec7c31e794c33e06b23a2dd3b32b566620a41a1d670d4af29bee891190af6477"} Oct 14 15:08:55 crc kubenswrapper[4860]: I1014 15:08:55.968379 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcfzd\" (UniqueName: \"kubernetes.io/projected/699edce6-a0b8-48e8-b5cb-b27747a6c048-kube-api-access-gcfzd\") on node \"crc\" DevicePath \"\"" Oct 14 15:08:56 crc kubenswrapper[4860]: I1014 15:08:56.208266 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sc6wm-config-b8txz"] Oct 14 15:08:56 crc kubenswrapper[4860]: W1014 15:08:56.225465 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode39a8552_5798_4ad4_b296_03f62f450319.slice/crio-1fdf8a3822e4222e2c9b3ecaa82fe73c74bc40c2d2c13ea9ee55ccf4ced664d4 WatchSource:0}: Error finding container 1fdf8a3822e4222e2c9b3ecaa82fe73c74bc40c2d2c13ea9ee55ccf4ced664d4: Status 404 returned error can't find the container with id 1fdf8a3822e4222e2c9b3ecaa82fe73c74bc40c2d2c13ea9ee55ccf4ced664d4 Oct 14 15:08:56 crc kubenswrapper[4860]: I1014 15:08:56.931073 4860 generic.go:334] "Generic (PLEG): container finished" podID="e39a8552-5798-4ad4-b296-03f62f450319" containerID="edaaf103a300e98b5662cbf9cb71f3bbbc54d1cf253d9dff60f253a727961e2b" exitCode=0 Oct 14 15:08:56 crc kubenswrapper[4860]: I1014 15:08:56.931413 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sc6wm-config-b8txz" 
event={"ID":"e39a8552-5798-4ad4-b296-03f62f450319","Type":"ContainerDied","Data":"edaaf103a300e98b5662cbf9cb71f3bbbc54d1cf253d9dff60f253a727961e2b"} Oct 14 15:08:56 crc kubenswrapper[4860]: I1014 15:08:56.931437 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sc6wm-config-b8txz" event={"ID":"e39a8552-5798-4ad4-b296-03f62f450319","Type":"ContainerStarted","Data":"1fdf8a3822e4222e2c9b3ecaa82fe73c74bc40c2d2c13ea9ee55ccf4ced664d4"} Oct 14 15:08:56 crc kubenswrapper[4860]: I1014 15:08:56.933288 4860 generic.go:334] "Generic (PLEG): container finished" podID="4cc19e55-2664-49bd-8f7e-856d1c9b3ecd" containerID="d889ae149d75437610bee3220ea50edcb778a377ef7deb794b069109ba64d533" exitCode=0 Oct 14 15:08:56 crc kubenswrapper[4860]: I1014 15:08:56.933466 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-b2bd2" event={"ID":"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd","Type":"ContainerDied","Data":"d889ae149d75437610bee3220ea50edcb778a377ef7deb794b069109ba64d533"} Oct 14 15:08:57 crc kubenswrapper[4860]: I1014 15:08:57.073325 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9138e3ca-610f-4970-984b-626c6aab739d" path="/var/lib/kubelet/pods/9138e3ca-610f-4970-984b-626c6aab739d/volumes" Oct 14 15:08:59 crc kubenswrapper[4860]: I1014 15:08:59.397820 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 14 15:08:59 crc kubenswrapper[4860]: I1014 15:08:59.532212 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-95xfl" podUID="9138e3ca-610f-4970-984b-626c6aab739d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.97:5353: i/o timeout" Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.930546 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.940104 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-54f4-account-create-5h8k4" Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.947826 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.973740 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc7c2\" (UniqueName: \"kubernetes.io/projected/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-kube-api-access-cc7c2\") pod \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.973827 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-etc-swift\") pod \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.973898 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-combined-ca-bundle\") pod \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.973962 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-dispersionconf\") pod \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.974005 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-swiftconf\") pod \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.974076 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-scripts\") pod \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.974146 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-ring-data-devices\") pod \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\" (UID: \"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd\") " Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.979176 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4cc19e55-2664-49bd-8f7e-856d1c9b3ecd" (UID: "4cc19e55-2664-49bd-8f7e-856d1c9b3ecd"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.980370 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4cc19e55-2664-49bd-8f7e-856d1c9b3ecd" (UID: "4cc19e55-2664-49bd-8f7e-856d1c9b3ecd"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.980584 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sc6wm-config-b8txz" event={"ID":"e39a8552-5798-4ad4-b296-03f62f450319","Type":"ContainerDied","Data":"1fdf8a3822e4222e2c9b3ecaa82fe73c74bc40c2d2c13ea9ee55ccf4ced664d4"} Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.980615 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fdf8a3822e4222e2c9b3ecaa82fe73c74bc40c2d2c13ea9ee55ccf4ced664d4" Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.980687 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sc6wm-config-b8txz" Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.987576 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54f4-account-create-5h8k4" event={"ID":"2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b","Type":"ContainerDied","Data":"45dd2fbc298fc7fed39863af1caaebeeb46871ef245f8e7b4706beabcd824452"} Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.987597 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-54f4-account-create-5h8k4" Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.987611 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45dd2fbc298fc7fed39863af1caaebeeb46871ef245f8e7b4706beabcd824452" Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.996899 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-b2bd2" event={"ID":"4cc19e55-2664-49bd-8f7e-856d1c9b3ecd","Type":"ContainerDied","Data":"f9bcfe6154f381448458709d6dc035f3c1ef03c758db344a18c6474fa9e9ed36"} Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.997285 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9bcfe6154f381448458709d6dc035f3c1ef03c758db344a18c6474fa9e9ed36" Oct 14 15:09:00 crc kubenswrapper[4860]: I1014 15:09:00.997538 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-b2bd2" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.007000 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4cc19e55-2664-49bd-8f7e-856d1c9b3ecd" (UID: "4cc19e55-2664-49bd-8f7e-856d1c9b3ecd"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.008324 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-scripts" (OuterVolumeSpecName: "scripts") pod "4cc19e55-2664-49bd-8f7e-856d1c9b3ecd" (UID: "4cc19e55-2664-49bd-8f7e-856d1c9b3ecd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.013561 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-kube-api-access-cc7c2" (OuterVolumeSpecName: "kube-api-access-cc7c2") pod "4cc19e55-2664-49bd-8f7e-856d1c9b3ecd" (UID: "4cc19e55-2664-49bd-8f7e-856d1c9b3ecd"). InnerVolumeSpecName "kube-api-access-cc7c2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.033514 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4cc19e55-2664-49bd-8f7e-856d1c9b3ecd" (UID: "4cc19e55-2664-49bd-8f7e-856d1c9b3ecd"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.044458 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cc19e55-2664-49bd-8f7e-856d1c9b3ecd" (UID: "4cc19e55-2664-49bd-8f7e-856d1c9b3ecd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.076598 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e39a8552-5798-4ad4-b296-03f62f450319-scripts\") pod \"e39a8552-5798-4ad4-b296-03f62f450319\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.076856 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p62q\" (UniqueName: \"kubernetes.io/projected/e39a8552-5798-4ad4-b296-03f62f450319-kube-api-access-5p62q\") pod \"e39a8552-5798-4ad4-b296-03f62f450319\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.076892 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqvrt\" (UniqueName: \"kubernetes.io/projected/2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b-kube-api-access-sqvrt\") pod \"2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b\" (UID: \"2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b\") " Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.076957 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-log-ovn\") pod \"e39a8552-5798-4ad4-b296-03f62f450319\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.076979 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e39a8552-5798-4ad4-b296-03f62f450319-additional-scripts\") pod \"e39a8552-5798-4ad4-b296-03f62f450319\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.077016 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-run\") pod \"e39a8552-5798-4ad4-b296-03f62f450319\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.077068 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-run-ovn\") pod \"e39a8552-5798-4ad4-b296-03f62f450319\" (UID: \"e39a8552-5798-4ad4-b296-03f62f450319\") " Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.077274 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e39a8552-5798-4ad4-b296-03f62f450319" (UID: "e39a8552-5798-4ad4-b296-03f62f450319"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.077668 4860 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.077683 4860 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.077694 4860 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.077703 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.077711 4860 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.077720 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc7c2\" (UniqueName: \"kubernetes.io/projected/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-kube-api-access-cc7c2\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.077728 4860 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.077736 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc19e55-2664-49bd-8f7e-856d1c9b3ecd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.077655 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e39a8552-5798-4ad4-b296-03f62f450319-scripts" (OuterVolumeSpecName: "scripts") pod "e39a8552-5798-4ad4-b296-03f62f450319" (UID: "e39a8552-5798-4ad4-b296-03f62f450319"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.077697 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-run" (OuterVolumeSpecName: "var-run") pod "e39a8552-5798-4ad4-b296-03f62f450319" (UID: "e39a8552-5798-4ad4-b296-03f62f450319"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.077773 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e39a8552-5798-4ad4-b296-03f62f450319" (UID: "e39a8552-5798-4ad4-b296-03f62f450319"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.078255 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e39a8552-5798-4ad4-b296-03f62f450319-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e39a8552-5798-4ad4-b296-03f62f450319" (UID: "e39a8552-5798-4ad4-b296-03f62f450319"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.080721 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e39a8552-5798-4ad4-b296-03f62f450319-kube-api-access-5p62q" (OuterVolumeSpecName: "kube-api-access-5p62q") pod "e39a8552-5798-4ad4-b296-03f62f450319" (UID: "e39a8552-5798-4ad4-b296-03f62f450319"). InnerVolumeSpecName "kube-api-access-5p62q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.080846 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b-kube-api-access-sqvrt" (OuterVolumeSpecName: "kube-api-access-sqvrt") pod "2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b" (UID: "2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b"). InnerVolumeSpecName "kube-api-access-sqvrt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.179411 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e39a8552-5798-4ad4-b296-03f62f450319-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.179471 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p62q\" (UniqueName: \"kubernetes.io/projected/e39a8552-5798-4ad4-b296-03f62f450319-kube-api-access-5p62q\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.179482 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqvrt\" (UniqueName: \"kubernetes.io/projected/2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b-kube-api-access-sqvrt\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.179490 4860 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e39a8552-5798-4ad4-b296-03f62f450319-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.179526 4860 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-run\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:01 crc kubenswrapper[4860]: I1014 15:09:01.179536 4860 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e39a8552-5798-4ad4-b296-03f62f450319-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:02 crc kubenswrapper[4860]: I1014 15:09:02.068867 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sc6wm-config-b8txz"] Oct 14 15:09:02 crc kubenswrapper[4860]: I1014 15:09:02.074523 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sc6wm-config-b8txz"] Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.070518 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39a8552-5798-4ad4-b296-03f62f450319" path="/var/lib/kubelet/pods/e39a8552-5798-4ad4-b296-03f62f450319/volumes" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.661292 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-7dg8n"] Oct 14 15:09:03 crc kubenswrapper[4860]: E1014 15:09:03.661694 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532385fd-6404-44f6-93fa-0bfcf9b16662" containerName="mariadb-database-create" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.661719 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="532385fd-6404-44f6-93fa-0bfcf9b16662" containerName="mariadb-database-create" Oct 14 15:09:03 crc kubenswrapper[4860]: E1014 15:09:03.661744 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de56759-b727-495a-b9dd-3daa0cd45527" containerName="mariadb-database-create" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.661754 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de56759-b727-495a-b9dd-3daa0cd45527" containerName="mariadb-database-create" Oct 14 15:09:03 crc kubenswrapper[4860]: E1014 15:09:03.661767 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39a8552-5798-4ad4-b296-03f62f450319" containerName="ovn-config" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.661775 4860 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e39a8552-5798-4ad4-b296-03f62f450319" containerName="ovn-config" Oct 14 15:09:03 crc kubenswrapper[4860]: E1014 15:09:03.661785 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc19e55-2664-49bd-8f7e-856d1c9b3ecd" containerName="swift-ring-rebalance" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.661792 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc19e55-2664-49bd-8f7e-856d1c9b3ecd" containerName="swift-ring-rebalance" Oct 14 15:09:03 crc kubenswrapper[4860]: E1014 15:09:03.661805 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b" containerName="mariadb-account-create" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.661812 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b" containerName="mariadb-account-create" Oct 14 15:09:03 crc kubenswrapper[4860]: E1014 15:09:03.661829 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="699edce6-a0b8-48e8-b5cb-b27747a6c048" containerName="mariadb-database-create" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.661837 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="699edce6-a0b8-48e8-b5cb-b27747a6c048" containerName="mariadb-database-create" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.662056 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="532385fd-6404-44f6-93fa-0bfcf9b16662" containerName="mariadb-database-create" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.662101 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39a8552-5798-4ad4-b296-03f62f450319" containerName="ovn-config" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.662120 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cc19e55-2664-49bd-8f7e-856d1c9b3ecd" containerName="swift-ring-rebalance" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.662152 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de56759-b727-495a-b9dd-3daa0cd45527" containerName="mariadb-database-create" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.662170 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b" containerName="mariadb-account-create" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.662183 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="699edce6-a0b8-48e8-b5cb-b27747a6c048" containerName="mariadb-database-create" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.662828 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-7dg8n" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.668019 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.668154 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xsgm5" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.668303 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.671148 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.687228 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-7dg8n"] Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.720641 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-combined-ca-bundle\") pod \"keystone-db-sync-7dg8n\" (UID: \"ca190f5b-bd3e-4628-a10d-6b8de6e826d8\") " pod="openstack/keystone-db-sync-7dg8n" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.720742 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh5b6\" (UniqueName: \"kubernetes.io/projected/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-kube-api-access-fh5b6\") pod \"keystone-db-sync-7dg8n\" (UID: \"ca190f5b-bd3e-4628-a10d-6b8de6e826d8\") " pod="openstack/keystone-db-sync-7dg8n" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.720850 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-config-data\") pod \"keystone-db-sync-7dg8n\" (UID: \"ca190f5b-bd3e-4628-a10d-6b8de6e826d8\") " pod="openstack/keystone-db-sync-7dg8n" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.822414 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh5b6\" (UniqueName: \"kubernetes.io/projected/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-kube-api-access-fh5b6\") pod \"keystone-db-sync-7dg8n\" (UID: \"ca190f5b-bd3e-4628-a10d-6b8de6e826d8\") " pod="openstack/keystone-db-sync-7dg8n" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.822520 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-config-data\") pod \"keystone-db-sync-7dg8n\" (UID: \"ca190f5b-bd3e-4628-a10d-6b8de6e826d8\") " pod="openstack/keystone-db-sync-7dg8n" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.822555 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-combined-ca-bundle\") pod \"keystone-db-sync-7dg8n\" (UID: \"ca190f5b-bd3e-4628-a10d-6b8de6e826d8\") " pod="openstack/keystone-db-sync-7dg8n" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.827717 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-config-data\") pod \"keystone-db-sync-7dg8n\" (UID: \"ca190f5b-bd3e-4628-a10d-6b8de6e826d8\") " 
pod="openstack/keystone-db-sync-7dg8n" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.828215 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-combined-ca-bundle\") pod \"keystone-db-sync-7dg8n\" (UID: \"ca190f5b-bd3e-4628-a10d-6b8de6e826d8\") " pod="openstack/keystone-db-sync-7dg8n" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.845636 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh5b6\" (UniqueName: \"kubernetes.io/projected/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-kube-api-access-fh5b6\") pod \"keystone-db-sync-7dg8n\" (UID: \"ca190f5b-bd3e-4628-a10d-6b8de6e826d8\") " pod="openstack/keystone-db-sync-7dg8n" Oct 14 15:09:03 crc kubenswrapper[4860]: I1014 15:09:03.993850 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-7dg8n" Oct 14 15:09:08 crc kubenswrapper[4860]: I1014 15:09:08.329766 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0" Oct 14 15:09:08 crc kubenswrapper[4860]: I1014 15:09:08.338917 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a-etc-swift\") pod \"swift-storage-0\" (UID: \"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a\") " pod="openstack/swift-storage-0" Oct 14 15:09:08 crc kubenswrapper[4860]: I1014 15:09:08.375831 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:09.496265 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-7dg8n"] Oct 14 15:09:10 crc kubenswrapper[4860]: W1014 15:09:09.566467 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7daefc0_ac71_4a73_9da7_7cf2fecfaf4a.slice/crio-6502a730f8959cd63fa39b97565b2e97a7ed9bb02a8adcda040ff768d9bb69c0 WatchSource:0}: Error finding container 6502a730f8959cd63fa39b97565b2e97a7ed9bb02a8adcda040ff768d9bb69c0: Status 404 returned error can't find the container with id 6502a730f8959cd63fa39b97565b2e97a7ed9bb02a8adcda040ff768d9bb69c0 Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:09.578419 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:10.075225 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7dg8n" event={"ID":"ca190f5b-bd3e-4628-a10d-6b8de6e826d8","Type":"ContainerStarted","Data":"5647649ce5ebdb9079b0bcfe41ab92cc2620a73d368f37513de53a813c79d7ca"} Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:10.076913 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b8tzr" event={"ID":"06b62797-8a97-4db0-a6ca-e7b2172ddb78","Type":"ContainerStarted","Data":"0898e642f989278d78317aeabfc64b13728aaf1b34251fdc3e2493d641c4b355"} Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:10.080565 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a","Type":"ContainerStarted","Data":"6502a730f8959cd63fa39b97565b2e97a7ed9bb02a8adcda040ff768d9bb69c0"} Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:10.112903 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-b8tzr" podStartSLOduration=2.598323136 podStartE2EDuration="22.112885295s" podCreationTimestamp="2025-10-14 15:08:48 +0000 UTC" firstStartedPulling="2025-10-14 15:08:49.821995405 +0000 UTC m=+1191.408778854" lastFinishedPulling="2025-10-14 15:09:09.336557564 +0000 UTC m=+1210.923341013" observedRunningTime="2025-10-14 15:09:10.107216378 +0000 UTC m=+1211.693999837" watchObservedRunningTime="2025-10-14 15:09:10.112885295 +0000 UTC m=+1211.699668764" Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:10.675696 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6793-account-create-h8c74"] Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:10.679732 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6793-account-create-h8c74" Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:10.684845 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6793-account-create-h8c74"] Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:10.700690 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:10.769369 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6bb9\" (UniqueName: \"kubernetes.io/projected/51429dab-2e49-4ee4-8fcf-4ecd0070b5a5-kube-api-access-s6bb9\") pod \"cinder-6793-account-create-h8c74\" (UID: \"51429dab-2e49-4ee4-8fcf-4ecd0070b5a5\") " pod="openstack/cinder-6793-account-create-h8c74" Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:10.870343 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6bb9\" (UniqueName: \"kubernetes.io/projected/51429dab-2e49-4ee4-8fcf-4ecd0070b5a5-kube-api-access-s6bb9\") pod \"cinder-6793-account-create-h8c74\" (UID: \"51429dab-2e49-4ee4-8fcf-4ecd0070b5a5\") " pod="openstack/cinder-6793-account-create-h8c74" Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:10.916100 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-9386-account-create-bbktn"] Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:10.917928 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9386-account-create-bbktn" Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:10.922997 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9386-account-create-bbktn"] Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:10.930427 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:10.961863 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6bb9\" (UniqueName: \"kubernetes.io/projected/51429dab-2e49-4ee4-8fcf-4ecd0070b5a5-kube-api-access-s6bb9\") pod \"cinder-6793-account-create-h8c74\" (UID: \"51429dab-2e49-4ee4-8fcf-4ecd0070b5a5\") " pod="openstack/cinder-6793-account-create-h8c74" Oct 14 15:09:10 crc kubenswrapper[4860]: I1014 15:09:10.984669 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmktq\" (UniqueName: \"kubernetes.io/projected/2552725f-0e4f-4766-b564-6c01b225d5d5-kube-api-access-zmktq\") pod \"barbican-9386-account-create-bbktn\" (UID: \"2552725f-0e4f-4766-b564-6c01b225d5d5\") " pod="openstack/barbican-9386-account-create-bbktn" Oct 14 15:09:11 crc kubenswrapper[4860]: I1014 15:09:11.023433 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6793-account-create-h8c74" Oct 14 15:09:11 crc kubenswrapper[4860]: I1014 15:09:11.085783 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmktq\" (UniqueName: \"kubernetes.io/projected/2552725f-0e4f-4766-b564-6c01b225d5d5-kube-api-access-zmktq\") pod \"barbican-9386-account-create-bbktn\" (UID: \"2552725f-0e4f-4766-b564-6c01b225d5d5\") " pod="openstack/barbican-9386-account-create-bbktn" Oct 14 15:09:11 crc kubenswrapper[4860]: I1014 15:09:11.110195 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-95d5-account-create-26t2x"] Oct 14 15:09:11 crc kubenswrapper[4860]: I1014 15:09:11.111481 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-95d5-account-create-26t2x" Oct 14 15:09:11 crc kubenswrapper[4860]: I1014 15:09:11.114425 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 14 15:09:11 crc kubenswrapper[4860]: I1014 15:09:11.121905 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-95d5-account-create-26t2x"] Oct 14 15:09:11 crc kubenswrapper[4860]: I1014 15:09:11.142556 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmktq\" (UniqueName: \"kubernetes.io/projected/2552725f-0e4f-4766-b564-6c01b225d5d5-kube-api-access-zmktq\") pod \"barbican-9386-account-create-bbktn\" (UID: \"2552725f-0e4f-4766-b564-6c01b225d5d5\") " pod="openstack/barbican-9386-account-create-bbktn" Oct 14 15:09:11 crc kubenswrapper[4860]: I1014 15:09:11.188143 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frv49\" (UniqueName: \"kubernetes.io/projected/48c09b2e-db9d-4989-9f7a-a0f94458d4d5-kube-api-access-frv49\") pod \"neutron-95d5-account-create-26t2x\" (UID: \"48c09b2e-db9d-4989-9f7a-a0f94458d4d5\") " pod="openstack/neutron-95d5-account-create-26t2x" Oct 14 15:09:11 crc kubenswrapper[4860]: I1014 15:09:11.291010 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frv49\" (UniqueName: \"kubernetes.io/projected/48c09b2e-db9d-4989-9f7a-a0f94458d4d5-kube-api-access-frv49\") pod \"neutron-95d5-account-create-26t2x\" (UID: \"48c09b2e-db9d-4989-9f7a-a0f94458d4d5\") " pod="openstack/neutron-95d5-account-create-26t2x" Oct 14 15:09:11 crc kubenswrapper[4860]: I1014 15:09:11.319927 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9386-account-create-bbktn" Oct 14 15:09:11 crc kubenswrapper[4860]: I1014 15:09:11.324539 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frv49\" (UniqueName: \"kubernetes.io/projected/48c09b2e-db9d-4989-9f7a-a0f94458d4d5-kube-api-access-frv49\") pod \"neutron-95d5-account-create-26t2x\" (UID: \"48c09b2e-db9d-4989-9f7a-a0f94458d4d5\") " pod="openstack/neutron-95d5-account-create-26t2x" Oct 14 15:09:11 crc kubenswrapper[4860]: I1014 15:09:11.441458 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-95d5-account-create-26t2x" Oct 14 15:09:11 crc kubenswrapper[4860]: I1014 15:09:11.679203 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6793-account-create-h8c74"] Oct 14 15:09:11 crc kubenswrapper[4860]: W1014 15:09:11.703145 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51429dab_2e49_4ee4_8fcf_4ecd0070b5a5.slice/crio-e02a2d81168f13ae44fa588897f89b4a86fed3e5c9bc611f04f28b5d424ce894 WatchSource:0}: Error finding container e02a2d81168f13ae44fa588897f89b4a86fed3e5c9bc611f04f28b5d424ce894: Status 404 returned error can't find the container with id e02a2d81168f13ae44fa588897f89b4a86fed3e5c9bc611f04f28b5d424ce894 Oct 14 15:09:11 crc kubenswrapper[4860]: I1014 15:09:11.827009 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9386-account-create-bbktn"] Oct 14 15:09:11 crc kubenswrapper[4860]: W1014 15:09:11.845224 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2552725f_0e4f_4766_b564_6c01b225d5d5.slice/crio-6748888563caaea6d49f4697cc0a44d05e3313a40a50145bdd08fe6fb28a92aa WatchSource:0}: Error finding container 6748888563caaea6d49f4697cc0a44d05e3313a40a50145bdd08fe6fb28a92aa: Status 404 returned error can't find the container with id 6748888563caaea6d49f4697cc0a44d05e3313a40a50145bdd08fe6fb28a92aa Oct 14 15:09:11 crc kubenswrapper[4860]: I1014 15:09:11.986781 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-95d5-account-create-26t2x"] Oct 14 15:09:12 crc kubenswrapper[4860]: I1014 15:09:12.113686 4860 generic.go:334] "Generic (PLEG): container finished" podID="51429dab-2e49-4ee4-8fcf-4ecd0070b5a5" containerID="0827df9788072d480b33a545b29a52af3504f7985ba9088314b186c414845895" exitCode=0 Oct 14 15:09:12 crc kubenswrapper[4860]: I1014 15:09:12.113757 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6793-account-create-h8c74" event={"ID":"51429dab-2e49-4ee4-8fcf-4ecd0070b5a5","Type":"ContainerDied","Data":"0827df9788072d480b33a545b29a52af3504f7985ba9088314b186c414845895"} Oct 14 15:09:12 crc kubenswrapper[4860]: I1014 15:09:12.114264 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6793-account-create-h8c74" event={"ID":"51429dab-2e49-4ee4-8fcf-4ecd0070b5a5","Type":"ContainerStarted","Data":"e02a2d81168f13ae44fa588897f89b4a86fed3e5c9bc611f04f28b5d424ce894"} Oct 14 15:09:12 crc kubenswrapper[4860]: I1014 15:09:12.117655 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95d5-account-create-26t2x" event={"ID":"48c09b2e-db9d-4989-9f7a-a0f94458d4d5","Type":"ContainerStarted","Data":"889666572d346dcc0dc48dc69d3110ebcefdefadc6efd29a1325573232514e14"} Oct 14 15:09:12 crc kubenswrapper[4860]: I1014 15:09:12.119761 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9386-account-create-bbktn" event={"ID":"2552725f-0e4f-4766-b564-6c01b225d5d5","Type":"ContainerStarted","Data":"23b40a675aee4a461bb601653b5aa9d804f82ad109ed6ee85be1f640011fc8ff"} Oct 14 15:09:12 crc kubenswrapper[4860]: I1014 15:09:12.119806 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9386-account-create-bbktn" event={"ID":"2552725f-0e4f-4766-b564-6c01b225d5d5","Type":"ContainerStarted","Data":"6748888563caaea6d49f4697cc0a44d05e3313a40a50145bdd08fe6fb28a92aa"} Oct 14 15:09:12 crc 
kubenswrapper[4860]: I1014 15:09:12.122189 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a","Type":"ContainerStarted","Data":"8f50dce10cccfce04d480cc293bcc9834ab75bfd146f9a6e990606f66e156183"} Oct 14 15:09:12 crc kubenswrapper[4860]: I1014 15:09:12.122232 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a","Type":"ContainerStarted","Data":"6d550f9d877e1067a6ee0d08a03b67102273e963f9a11b8a043d7f771c47a549"} Oct 14 15:09:12 crc kubenswrapper[4860]: I1014 15:09:12.122243 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a","Type":"ContainerStarted","Data":"2a673d8d5c84dd9b1f58ca33458f973435d452b59f1c29ac873f801aa0392989"} Oct 14 15:09:12 crc kubenswrapper[4860]: I1014 15:09:12.154198 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-9386-account-create-bbktn" podStartSLOduration=2.154173372 podStartE2EDuration="2.154173372s" podCreationTimestamp="2025-10-14 15:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:09:12.147194314 +0000 UTC m=+1213.733977773" watchObservedRunningTime="2025-10-14 15:09:12.154173372 +0000 UTC m=+1213.740956821" Oct 14 15:09:13 crc kubenswrapper[4860]: I1014 15:09:13.133660 4860 generic.go:334] "Generic (PLEG): container finished" podID="48c09b2e-db9d-4989-9f7a-a0f94458d4d5" containerID="a29ffdf3ee315aaef1e9be6f2e0e4822c67866303fb1387875c62b37ac9a5a32" exitCode=0 Oct 14 15:09:13 crc kubenswrapper[4860]: I1014 15:09:13.133998 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95d5-account-create-26t2x" event={"ID":"48c09b2e-db9d-4989-9f7a-a0f94458d4d5","Type":"ContainerDied","Data":"a29ffdf3ee315aaef1e9be6f2e0e4822c67866303fb1387875c62b37ac9a5a32"} Oct 14 15:09:13 crc kubenswrapper[4860]: I1014 15:09:13.138579 4860 generic.go:334] "Generic (PLEG): container finished" podID="2552725f-0e4f-4766-b564-6c01b225d5d5" containerID="23b40a675aee4a461bb601653b5aa9d804f82ad109ed6ee85be1f640011fc8ff" exitCode=0 Oct 14 15:09:13 crc kubenswrapper[4860]: I1014 15:09:13.138664 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9386-account-create-bbktn" event={"ID":"2552725f-0e4f-4766-b564-6c01b225d5d5","Type":"ContainerDied","Data":"23b40a675aee4a461bb601653b5aa9d804f82ad109ed6ee85be1f640011fc8ff"} Oct 14 15:09:13 crc kubenswrapper[4860]: I1014 15:09:13.145985 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a","Type":"ContainerStarted","Data":"56fc92dc9c662bcbdef81d1a175af934f709943d2f7a6a16d4ca836f7a90be3d"} Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.174288 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9386-account-create-bbktn" event={"ID":"2552725f-0e4f-4766-b564-6c01b225d5d5","Type":"ContainerDied","Data":"6748888563caaea6d49f4697cc0a44d05e3313a40a50145bdd08fe6fb28a92aa"} Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.174974 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6748888563caaea6d49f4697cc0a44d05e3313a40a50145bdd08fe6fb28a92aa" Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.185495 4860 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-6793-account-create-h8c74" event={"ID":"51429dab-2e49-4ee4-8fcf-4ecd0070b5a5","Type":"ContainerDied","Data":"e02a2d81168f13ae44fa588897f89b4a86fed3e5c9bc611f04f28b5d424ce894"} Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.185918 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e02a2d81168f13ae44fa588897f89b4a86fed3e5c9bc611f04f28b5d424ce894" Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.197933 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95d5-account-create-26t2x" event={"ID":"48c09b2e-db9d-4989-9f7a-a0f94458d4d5","Type":"ContainerDied","Data":"889666572d346dcc0dc48dc69d3110ebcefdefadc6efd29a1325573232514e14"} Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.197985 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="889666572d346dcc0dc48dc69d3110ebcefdefadc6efd29a1325573232514e14" Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.299148 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6793-account-create-h8c74" Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.320727 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-95d5-account-create-26t2x" Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.334058 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9386-account-create-bbktn" Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.480827 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6bb9\" (UniqueName: \"kubernetes.io/projected/51429dab-2e49-4ee4-8fcf-4ecd0070b5a5-kube-api-access-s6bb9\") pod \"51429dab-2e49-4ee4-8fcf-4ecd0070b5a5\" (UID: \"51429dab-2e49-4ee4-8fcf-4ecd0070b5a5\") " Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.480977 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frv49\" (UniqueName: \"kubernetes.io/projected/48c09b2e-db9d-4989-9f7a-a0f94458d4d5-kube-api-access-frv49\") pod \"48c09b2e-db9d-4989-9f7a-a0f94458d4d5\" (UID: \"48c09b2e-db9d-4989-9f7a-a0f94458d4d5\") " Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.481230 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmktq\" (UniqueName: \"kubernetes.io/projected/2552725f-0e4f-4766-b564-6c01b225d5d5-kube-api-access-zmktq\") pod \"2552725f-0e4f-4766-b564-6c01b225d5d5\" (UID: \"2552725f-0e4f-4766-b564-6c01b225d5d5\") " Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.485375 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51429dab-2e49-4ee4-8fcf-4ecd0070b5a5-kube-api-access-s6bb9" (OuterVolumeSpecName: "kube-api-access-s6bb9") pod "51429dab-2e49-4ee4-8fcf-4ecd0070b5a5" (UID: "51429dab-2e49-4ee4-8fcf-4ecd0070b5a5"). InnerVolumeSpecName "kube-api-access-s6bb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.485537 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2552725f-0e4f-4766-b564-6c01b225d5d5-kube-api-access-zmktq" (OuterVolumeSpecName: "kube-api-access-zmktq") pod "2552725f-0e4f-4766-b564-6c01b225d5d5" (UID: "2552725f-0e4f-4766-b564-6c01b225d5d5"). InnerVolumeSpecName "kube-api-access-zmktq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.490394 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c09b2e-db9d-4989-9f7a-a0f94458d4d5-kube-api-access-frv49" (OuterVolumeSpecName: "kube-api-access-frv49") pod "48c09b2e-db9d-4989-9f7a-a0f94458d4d5" (UID: "48c09b2e-db9d-4989-9f7a-a0f94458d4d5"). InnerVolumeSpecName "kube-api-access-frv49". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.582687 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmktq\" (UniqueName: \"kubernetes.io/projected/2552725f-0e4f-4766-b564-6c01b225d5d5-kube-api-access-zmktq\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.582720 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6bb9\" (UniqueName: \"kubernetes.io/projected/51429dab-2e49-4ee4-8fcf-4ecd0070b5a5-kube-api-access-s6bb9\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:16 crc kubenswrapper[4860]: I1014 15:09:16.582730 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frv49\" (UniqueName: \"kubernetes.io/projected/48c09b2e-db9d-4989-9f7a-a0f94458d4d5-kube-api-access-frv49\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:17 crc kubenswrapper[4860]: I1014 15:09:17.237234 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-95d5-account-create-26t2x" Oct 14 15:09:17 crc kubenswrapper[4860]: I1014 15:09:17.238075 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7dg8n" event={"ID":"ca190f5b-bd3e-4628-a10d-6b8de6e826d8","Type":"ContainerStarted","Data":"9ff7523dc98d437b2317e74a7a32f9172982d2d775eaef0c4ca7e9b6be523110"} Oct 14 15:09:17 crc kubenswrapper[4860]: I1014 15:09:17.238273 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6793-account-create-h8c74" Oct 14 15:09:17 crc kubenswrapper[4860]: I1014 15:09:17.239688 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-9386-account-create-bbktn" Oct 14 15:09:17 crc kubenswrapper[4860]: I1014 15:09:17.263078 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-7dg8n" podStartSLOduration=7.650394443 podStartE2EDuration="14.263058402s" podCreationTimestamp="2025-10-14 15:09:03 +0000 UTC" firstStartedPulling="2025-10-14 15:09:09.517422298 +0000 UTC m=+1211.104205747" lastFinishedPulling="2025-10-14 15:09:16.130086257 +0000 UTC m=+1217.716869706" observedRunningTime="2025-10-14 15:09:17.256103794 +0000 UTC m=+1218.842887253" watchObservedRunningTime="2025-10-14 15:09:17.263058402 +0000 UTC m=+1218.849841861" Oct 14 15:09:20 crc kubenswrapper[4860]: I1014 15:09:20.267667 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a","Type":"ContainerStarted","Data":"ed552158b51803baa0111b6bac8f15145bed87fd6840091cccbf57daebf9e283"} Oct 14 15:09:20 crc kubenswrapper[4860]: I1014 15:09:20.268204 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a","Type":"ContainerStarted","Data":"76d17f4c5a23dd6ed7dfb4fa6d296cb9271d11bc03403ceb0fd4537ecbf6cd9f"} Oct 14 15:09:21 crc kubenswrapper[4860]: I1014 15:09:21.298046 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a","Type":"ContainerStarted","Data":"7caa96754b674948e2e862ee195980a4bbab2a11f3e4bf4c784de9067fd37f24"} Oct 14 15:09:21 crc kubenswrapper[4860]: I1014 15:09:21.298383 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a","Type":"ContainerStarted","Data":"18fbcb11805999ec3802c6187ce8caa2a019ee20b696b0b2fae33e24ad3bd781"} Oct 14 15:09:23 crc kubenswrapper[4860]: I1014 15:09:23.327845 4860 generic.go:334] "Generic (PLEG): container finished" podID="ca190f5b-bd3e-4628-a10d-6b8de6e826d8" containerID="9ff7523dc98d437b2317e74a7a32f9172982d2d775eaef0c4ca7e9b6be523110" exitCode=0 Oct 14 15:09:23 crc kubenswrapper[4860]: I1014 15:09:23.328062 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7dg8n" event={"ID":"ca190f5b-bd3e-4628-a10d-6b8de6e826d8","Type":"ContainerDied","Data":"9ff7523dc98d437b2317e74a7a32f9172982d2d775eaef0c4ca7e9b6be523110"} Oct 14 15:09:23 crc kubenswrapper[4860]: I1014 15:09:23.339666 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a","Type":"ContainerStarted","Data":"6ce994fd2b0c40083b0444d13d4f694e58c46b2f7bc8b4fb3e3c4bcd6411cb1f"} Oct 14 15:09:23 crc kubenswrapper[4860]: I1014 15:09:23.339707 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a","Type":"ContainerStarted","Data":"0accaa074f1b06bb29da92932ca7277700ea6e538b7a0eb259dc779baefbb118"} Oct 14 15:09:23 crc kubenswrapper[4860]: I1014 15:09:23.339728 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a","Type":"ContainerStarted","Data":"6aaef4b644b3c866b74a6c31627a0b684483cc170462635025f6d448e3128d8c"} Oct 14 15:09:23 crc kubenswrapper[4860]: I1014 15:09:23.339741 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a","Type":"ContainerStarted","Data":"7bb3eae4843cc03819c982bd310fd9283f61f5c1c597c273afdf03173275f7d6"} Oct 14 15:09:23 crc kubenswrapper[4860]: I1014 15:09:23.339754 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a","Type":"ContainerStarted","Data":"9e52417af887a4bfb748c74e315f75f4360fef67caae678aba50437c8d71bbae"} Oct 14 15:09:23 crc kubenswrapper[4860]: I1014 15:09:23.339764 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a","Type":"ContainerStarted","Data":"a79c380d389e5874b4b001daed17eccd4bf8308d49ff012b4798e7e81b46b69f"} Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.356444 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a","Type":"ContainerStarted","Data":"10dca8702c09ce1ba6b1d95a0824f41965ac65771e6df998aa248d8d0b3f660f"} Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.378225 4860 generic.go:334] "Generic (PLEG): container finished" podID="06b62797-8a97-4db0-a6ca-e7b2172ddb78" containerID="0898e642f989278d78317aeabfc64b13728aaf1b34251fdc3e2493d641c4b355" exitCode=0 Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.378303 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b8tzr" event={"ID":"06b62797-8a97-4db0-a6ca-e7b2172ddb78","Type":"ContainerDied","Data":"0898e642f989278d78317aeabfc64b13728aaf1b34251fdc3e2493d641c4b355"} Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.413114 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.793736947 podStartE2EDuration="49.413095765s" podCreationTimestamp="2025-10-14 15:08:35 +0000 UTC" firstStartedPulling="2025-10-14 15:09:09.569738512 +0000 UTC m=+1211.156521971" lastFinishedPulling="2025-10-14 15:09:22.18909733 +0000 UTC m=+1223.775880789" observedRunningTime="2025-10-14 15:09:24.4112238 +0000 UTC m=+1225.998007269" watchObservedRunningTime="2025-10-14 15:09:24.413095765 +0000 UTC m=+1225.999879224" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.717228 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-q5pgz"] Oct 14 15:09:24 crc kubenswrapper[4860]: E1014 15:09:24.717839 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c09b2e-db9d-4989-9f7a-a0f94458d4d5" containerName="mariadb-account-create" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.717855 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c09b2e-db9d-4989-9f7a-a0f94458d4d5" containerName="mariadb-account-create" Oct 14 15:09:24 crc kubenswrapper[4860]: E1014 15:09:24.717867 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51429dab-2e49-4ee4-8fcf-4ecd0070b5a5" containerName="mariadb-account-create" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.717874 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="51429dab-2e49-4ee4-8fcf-4ecd0070b5a5" containerName="mariadb-account-create" Oct 14 15:09:24 crc kubenswrapper[4860]: E1014 15:09:24.717892 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2552725f-0e4f-4766-b564-6c01b225d5d5" containerName="mariadb-account-create" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.717898 4860 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2552725f-0e4f-4766-b564-6c01b225d5d5" containerName="mariadb-account-create" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.718077 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="51429dab-2e49-4ee4-8fcf-4ecd0070b5a5" containerName="mariadb-account-create" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.718103 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2552725f-0e4f-4766-b564-6c01b225d5d5" containerName="mariadb-account-create" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.718115 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="48c09b2e-db9d-4989-9f7a-a0f94458d4d5" containerName="mariadb-account-create" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.718935 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.723431 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.729504 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-7dg8n" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.744528 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-q5pgz"] Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.824840 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh5b6\" (UniqueName: \"kubernetes.io/projected/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-kube-api-access-fh5b6\") pod \"ca190f5b-bd3e-4628-a10d-6b8de6e826d8\" (UID: \"ca190f5b-bd3e-4628-a10d-6b8de6e826d8\") " Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.824944 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-combined-ca-bundle\") pod \"ca190f5b-bd3e-4628-a10d-6b8de6e826d8\" (UID: \"ca190f5b-bd3e-4628-a10d-6b8de6e826d8\") " Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.825078 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-config-data\") pod \"ca190f5b-bd3e-4628-a10d-6b8de6e826d8\" (UID: \"ca190f5b-bd3e-4628-a10d-6b8de6e826d8\") " Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.825327 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.825376 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-dns-svc\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.825477 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qjdp\" (UniqueName: 
\"kubernetes.io/projected/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-kube-api-access-6qjdp\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.825510 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.825551 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-config\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.825611 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.829577 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-kube-api-access-fh5b6" (OuterVolumeSpecName: "kube-api-access-fh5b6") pod "ca190f5b-bd3e-4628-a10d-6b8de6e826d8" (UID: "ca190f5b-bd3e-4628-a10d-6b8de6e826d8"). InnerVolumeSpecName "kube-api-access-fh5b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.852713 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca190f5b-bd3e-4628-a10d-6b8de6e826d8" (UID: "ca190f5b-bd3e-4628-a10d-6b8de6e826d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.879211 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-config-data" (OuterVolumeSpecName: "config-data") pod "ca190f5b-bd3e-4628-a10d-6b8de6e826d8" (UID: "ca190f5b-bd3e-4628-a10d-6b8de6e826d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.926951 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.927009 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.927049 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-dns-svc\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.927119 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qjdp\" (UniqueName: \"kubernetes.io/projected/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-kube-api-access-6qjdp\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.927137 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.927168 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-config\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.927224 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh5b6\" (UniqueName: \"kubernetes.io/projected/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-kube-api-access-fh5b6\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.927234 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.927243 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca190f5b-bd3e-4628-a10d-6b8de6e826d8-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.927965 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-config\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " 
pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.928509 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.928996 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.929533 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-dns-svc\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.931877 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:24 crc kubenswrapper[4860]: I1014 15:09:24.944473 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qjdp\" (UniqueName: \"kubernetes.io/projected/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-kube-api-access-6qjdp\") pod \"dnsmasq-dns-764c5664d7-q5pgz\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.049406 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.391306 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-7dg8n" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.391291 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7dg8n" event={"ID":"ca190f5b-bd3e-4628-a10d-6b8de6e826d8","Type":"ContainerDied","Data":"5647649ce5ebdb9079b0bcfe41ab92cc2620a73d368f37513de53a813c79d7ca"} Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.392470 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5647649ce5ebdb9079b0bcfe41ab92cc2620a73d368f37513de53a813c79d7ca" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.567571 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-q5pgz"] Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.752934 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-w74kc"] Oct 14 15:09:25 crc kubenswrapper[4860]: E1014 15:09:25.753590 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca190f5b-bd3e-4628-a10d-6b8de6e826d8" containerName="keystone-db-sync" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.753606 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca190f5b-bd3e-4628-a10d-6b8de6e826d8" containerName="keystone-db-sync" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.753794 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca190f5b-bd3e-4628-a10d-6b8de6e826d8" containerName="keystone-db-sync" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.754313 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.784564 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.784805 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.784994 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xsgm5" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.785186 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.827846 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-w74kc"] Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.879487 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-config-data\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.879538 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-combined-ca-bundle\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.879590 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-fernet-keys\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.879639 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-credential-keys\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.879721 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b66cb\" (UniqueName: \"kubernetes.io/projected/7b8c3dcb-4c41-43bd-852a-ad86946b1124-kube-api-access-b66cb\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.879742 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-scripts\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.911767 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-q5pgz"] Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.984201 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b66cb\" (UniqueName: \"kubernetes.io/projected/7b8c3dcb-4c41-43bd-852a-ad86946b1124-kube-api-access-b66cb\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.984245 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-scripts\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.984299 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-config-data\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.984318 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-combined-ca-bundle\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.984375 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-fernet-keys\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.984413 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-credential-keys\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.988470 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-scripts\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:25 crc kubenswrapper[4860]: I1014 15:09:25.997766 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-combined-ca-bundle\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.001622 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-credential-keys\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.003056 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-config-data\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.003591 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-fernet-keys\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.015712 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-b8tzr" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.053926 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-25qk2"] Oct 14 15:09:26 crc kubenswrapper[4860]: E1014 15:09:26.054301 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b62797-8a97-4db0-a6ca-e7b2172ddb78" containerName="glance-db-sync" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.054317 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b62797-8a97-4db0-a6ca-e7b2172ddb78" containerName="glance-db-sync" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.054484 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b62797-8a97-4db0-a6ca-e7b2172ddb78" containerName="glance-db-sync" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.055306 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.060739 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b66cb\" (UniqueName: \"kubernetes.io/projected/7b8c3dcb-4c41-43bd-852a-ad86946b1124-kube-api-access-b66cb\") pod \"keystone-bootstrap-w74kc\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.127728 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-25qk2"] Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.193512 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-db-sync-config-data\") pod \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\" (UID: \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\") " Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.193589 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-combined-ca-bundle\") pod \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\" (UID: \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\") " Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.193703 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-config-data\") pod \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\" (UID: \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\") " Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.193803 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zpl4\" (UniqueName: \"kubernetes.io/projected/06b62797-8a97-4db0-a6ca-e7b2172ddb78-kube-api-access-4zpl4\") pod \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\" (UID: \"06b62797-8a97-4db0-a6ca-e7b2172ddb78\") " Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.193978 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.194047 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-config\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.194071 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.194093 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snx68\" (UniqueName: 
\"kubernetes.io/projected/ea67bc14-5a08-414d-9951-dbcb24dc99a4-kube-api-access-snx68\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.194119 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-dns-svc\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.194146 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.204193 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "06b62797-8a97-4db0-a6ca-e7b2172ddb78" (UID: "06b62797-8a97-4db0-a6ca-e7b2172ddb78"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.205137 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.208546 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-grpb9"] Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.225390 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b62797-8a97-4db0-a6ca-e7b2172ddb78-kube-api-access-4zpl4" (OuterVolumeSpecName: "kube-api-access-4zpl4") pod "06b62797-8a97-4db0-a6ca-e7b2172ddb78" (UID: "06b62797-8a97-4db0-a6ca-e7b2172ddb78"). InnerVolumeSpecName "kube-api-access-4zpl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.225687 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.230042 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.249437 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-grpb9"] Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.257464 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.257675 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hvpvl" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.286332 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06b62797-8a97-4db0-a6ca-e7b2172ddb78" (UID: "06b62797-8a97-4db0-a6ca-e7b2172ddb78"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.296268 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.296496 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-config\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.296564 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.296633 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snx68\" (UniqueName: \"kubernetes.io/projected/ea67bc14-5a08-414d-9951-dbcb24dc99a4-kube-api-access-snx68\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.296726 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-dns-svc\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.296799 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.296921 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.296973 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zpl4\" (UniqueName: \"kubernetes.io/projected/06b62797-8a97-4db0-a6ca-e7b2172ddb78-kube-api-access-4zpl4\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.297039 4860 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.297820 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " 
pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.298429 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.298794 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.299383 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-config\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.299871 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-dns-svc\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.302070 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6466c9b897-b8tk5"] Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.324856 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6466c9b897-b8tk5"] Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.325175 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.362059 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.362144 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.362284 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.362395 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-z54cb" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.367418 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-config-data" (OuterVolumeSpecName: "config-data") pod "06b62797-8a97-4db0-a6ca-e7b2172ddb78" (UID: "06b62797-8a97-4db0-a6ca-e7b2172ddb78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.385709 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snx68\" (UniqueName: \"kubernetes.io/projected/ea67bc14-5a08-414d-9951-dbcb24dc99a4-kube-api-access-snx68\") pod \"dnsmasq-dns-5959f8865f-25qk2\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.402079 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-combined-ca-bundle\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.402149 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-scripts\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.402205 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfcq2\" (UniqueName: \"kubernetes.io/projected/ca080412-b618-4293-a06d-e0d9a774d36b-kube-api-access-rfcq2\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.402241 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-config-data\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.402276 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca080412-b618-4293-a06d-e0d9a774d36b-etc-machine-id\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.402320 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-db-sync-config-data\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.402456 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b62797-8a97-4db0-a6ca-e7b2172ddb78-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.444874 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-dhd74"] Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.445791 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-b8tzr" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.445809 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b8tzr" event={"ID":"06b62797-8a97-4db0-a6ca-e7b2172ddb78","Type":"ContainerDied","Data":"509354b4759c86555bd7278af4c261ce3f7647b7a3a4389e941cd4e416cfff54"} Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.445831 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="509354b4759c86555bd7278af4c261ce3f7647b7a3a4389e941cd4e416cfff54" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.446466 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dhd74" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.459926 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" event={"ID":"76fdb864-1fd8-4ca3-ab7a-3925084b3a21","Type":"ContainerStarted","Data":"b0fca893a0e18cc4845e449375731e7ec9837456ba9b03b02241d6bf8f1a7293"} Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.486919 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.495339 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.495574 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7nlmd" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.503887 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d50105c7-28e1-401b-8447-715e9749be1a-horizon-secret-key\") pod \"horizon-6466c9b897-b8tk5\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.503933 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfcq2\" (UniqueName: \"kubernetes.io/projected/ca080412-b618-4293-a06d-e0d9a774d36b-kube-api-access-rfcq2\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.503970 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-config-data\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.504003 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca080412-b618-4293-a06d-e0d9a774d36b-etc-machine-id\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.504054 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs7gt\" (UniqueName: \"kubernetes.io/projected/d50105c7-28e1-401b-8447-715e9749be1a-kube-api-access-qs7gt\") pod \"horizon-6466c9b897-b8tk5\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:26 crc 
kubenswrapper[4860]: I1014 15:09:26.504098 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-db-sync-config-data\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.504143 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d50105c7-28e1-401b-8447-715e9749be1a-scripts\") pod \"horizon-6466c9b897-b8tk5\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.504188 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d50105c7-28e1-401b-8447-715e9749be1a-config-data\") pod \"horizon-6466c9b897-b8tk5\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.504230 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-combined-ca-bundle\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.504259 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d50105c7-28e1-401b-8447-715e9749be1a-logs\") pod \"horizon-6466c9b897-b8tk5\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.504299 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-scripts\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.530176 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca080412-b618-4293-a06d-e0d9a774d36b-etc-machine-id\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.539857 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dhd74"] Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.554387 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-combined-ca-bundle\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.556349 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-scripts\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 
15:09:26.563702 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-db-sync-config-data\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.595264 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfcq2\" (UniqueName: \"kubernetes.io/projected/ca080412-b618-4293-a06d-e0d9a774d36b-kube-api-access-rfcq2\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.614502 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d50105c7-28e1-401b-8447-715e9749be1a-logs\") pod \"horizon-6466c9b897-b8tk5\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.614777 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8jhg\" (UniqueName: \"kubernetes.io/projected/c63dca02-9db5-41e7-90a0-0c19bd729242-kube-api-access-b8jhg\") pod \"neutron-db-sync-dhd74\" (UID: \"c63dca02-9db5-41e7-90a0-0c19bd729242\") " pod="openstack/neutron-db-sync-dhd74" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.615776 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d50105c7-28e1-401b-8447-715e9749be1a-logs\") pod \"horizon-6466c9b897-b8tk5\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.626221 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d50105c7-28e1-401b-8447-715e9749be1a-horizon-secret-key\") pod \"horizon-6466c9b897-b8tk5\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.626303 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c63dca02-9db5-41e7-90a0-0c19bd729242-config\") pod \"neutron-db-sync-dhd74\" (UID: \"c63dca02-9db5-41e7-90a0-0c19bd729242\") " pod="openstack/neutron-db-sync-dhd74" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.626427 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs7gt\" (UniqueName: \"kubernetes.io/projected/d50105c7-28e1-401b-8447-715e9749be1a-kube-api-access-qs7gt\") pod \"horizon-6466c9b897-b8tk5\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.626531 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d50105c7-28e1-401b-8447-715e9749be1a-scripts\") pod \"horizon-6466c9b897-b8tk5\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.626557 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c63dca02-9db5-41e7-90a0-0c19bd729242-combined-ca-bundle\") pod \"neutron-db-sync-dhd74\" (UID: \"c63dca02-9db5-41e7-90a0-0c19bd729242\") " pod="openstack/neutron-db-sync-dhd74" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.626647 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d50105c7-28e1-401b-8447-715e9749be1a-config-data\") pod \"horizon-6466c9b897-b8tk5\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.627840 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d50105c7-28e1-401b-8447-715e9749be1a-config-data\") pod \"horizon-6466c9b897-b8tk5\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.628552 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d50105c7-28e1-401b-8447-715e9749be1a-scripts\") pod \"horizon-6466c9b897-b8tk5\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.643502 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d50105c7-28e1-401b-8447-715e9749be1a-horizon-secret-key\") pod \"horizon-6466c9b897-b8tk5\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.645493 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-config-data\") pod \"cinder-db-sync-grpb9\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.689931 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.742894 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8jhg\" (UniqueName: \"kubernetes.io/projected/c63dca02-9db5-41e7-90a0-0c19bd729242-kube-api-access-b8jhg\") pod \"neutron-db-sync-dhd74\" (UID: \"c63dca02-9db5-41e7-90a0-0c19bd729242\") " pod="openstack/neutron-db-sync-dhd74" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.742954 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c63dca02-9db5-41e7-90a0-0c19bd729242-config\") pod \"neutron-db-sync-dhd74\" (UID: \"c63dca02-9db5-41e7-90a0-0c19bd729242\") " pod="openstack/neutron-db-sync-dhd74" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.743055 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63dca02-9db5-41e7-90a0-0c19bd729242-combined-ca-bundle\") pod \"neutron-db-sync-dhd74\" (UID: \"c63dca02-9db5-41e7-90a0-0c19bd729242\") " pod="openstack/neutron-db-sync-dhd74" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.756593 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63dca02-9db5-41e7-90a0-0c19bd729242-combined-ca-bundle\") pod \"neutron-db-sync-dhd74\" (UID: \"c63dca02-9db5-41e7-90a0-0c19bd729242\") " pod="openstack/neutron-db-sync-dhd74" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.764613 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-dbsq9"] Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.777005 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.797995 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c63dca02-9db5-41e7-90a0-0c19bd729242-config\") pod \"neutron-db-sync-dhd74\" (UID: \"c63dca02-9db5-41e7-90a0-0c19bd729242\") " pod="openstack/neutron-db-sync-dhd74" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.805478 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.805775 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.805895 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-f5hh4" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.809585 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs7gt\" (UniqueName: \"kubernetes.io/projected/d50105c7-28e1-401b-8447-715e9749be1a-kube-api-access-qs7gt\") pod \"horizon-6466c9b897-b8tk5\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.821771 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-84b975bf87-4qg2x"] Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.823202 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.839661 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8jhg\" (UniqueName: \"kubernetes.io/projected/c63dca02-9db5-41e7-90a0-0c19bd729242-kube-api-access-b8jhg\") pod \"neutron-db-sync-dhd74\" (UID: \"c63dca02-9db5-41e7-90a0-0c19bd729242\") " pod="openstack/neutron-db-sync-dhd74" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.847114 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-x2247"] Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.848567 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-x2247" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.849169 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3324c4e1-abc6-473d-8d14-28d41a4e27a8-logs\") pod \"placement-db-sync-dbsq9\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.849210 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-scripts\") pod \"placement-db-sync-dbsq9\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.849292 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-combined-ca-bundle\") pod \"placement-db-sync-dbsq9\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.849323 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vs4p\" (UniqueName: \"kubernetes.io/projected/3324c4e1-abc6-473d-8d14-28d41a4e27a8-kube-api-access-6vs4p\") pod \"placement-db-sync-dbsq9\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.849355 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-config-data\") pod \"placement-db-sync-dbsq9\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.852695 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dbsq9"] Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.856484 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dhd74" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.863713 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-grpb9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.873603 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-8h6vr" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.874346 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.891993 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84b975bf87-4qg2x"] Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.931087 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-x2247"] Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.951826 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79rf9\" (UniqueName: \"kubernetes.io/projected/f0a3bc02-1357-4751-9496-a41526515867-kube-api-access-79rf9\") pod \"barbican-db-sync-x2247\" (UID: \"f0a3bc02-1357-4751-9496-a41526515867\") " pod="openstack/barbican-db-sync-x2247" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.951884 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-scripts\") pod \"placement-db-sync-dbsq9\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.951917 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0a3bc02-1357-4751-9496-a41526515867-db-sync-config-data\") pod \"barbican-db-sync-x2247\" (UID: \"f0a3bc02-1357-4751-9496-a41526515867\") " pod="openstack/barbican-db-sync-x2247" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.951933 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b08c17e-22a5-4238-a9df-3efc1ae5f335-config-data\") pod \"horizon-84b975bf87-4qg2x\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.952000 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7b08c17e-22a5-4238-a9df-3efc1ae5f335-horizon-secret-key\") pod \"horizon-84b975bf87-4qg2x\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.952046 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpt5f\" (UniqueName: \"kubernetes.io/projected/7b08c17e-22a5-4238-a9df-3efc1ae5f335-kube-api-access-tpt5f\") pod \"horizon-84b975bf87-4qg2x\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.952082 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-combined-ca-bundle\") pod \"placement-db-sync-dbsq9\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 
15:09:26.952111 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b08c17e-22a5-4238-a9df-3efc1ae5f335-scripts\") pod \"horizon-84b975bf87-4qg2x\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.952137 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vs4p\" (UniqueName: \"kubernetes.io/projected/3324c4e1-abc6-473d-8d14-28d41a4e27a8-kube-api-access-6vs4p\") pod \"placement-db-sync-dbsq9\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.952155 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b08c17e-22a5-4238-a9df-3efc1ae5f335-logs\") pod \"horizon-84b975bf87-4qg2x\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.952181 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a3bc02-1357-4751-9496-a41526515867-combined-ca-bundle\") pod \"barbican-db-sync-x2247\" (UID: \"f0a3bc02-1357-4751-9496-a41526515867\") " pod="openstack/barbican-db-sync-x2247" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.952221 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-config-data\") pod \"placement-db-sync-dbsq9\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.952257 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3324c4e1-abc6-473d-8d14-28d41a4e27a8-logs\") pod \"placement-db-sync-dbsq9\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.952665 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3324c4e1-abc6-473d-8d14-28d41a4e27a8-logs\") pod \"placement-db-sync-dbsq9\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.958700 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-combined-ca-bundle\") pod \"placement-db-sync-dbsq9\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.961637 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-config-data\") pod \"placement-db-sync-dbsq9\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.965448 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-scripts\") pod \"placement-db-sync-dbsq9\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.979259 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-25qk2"] Oct 14 15:09:26 crc kubenswrapper[4860]: I1014 15:09:26.987384 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.028716 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vs4p\" (UniqueName: \"kubernetes.io/projected/3324c4e1-abc6-473d-8d14-28d41a4e27a8-kube-api-access-6vs4p\") pod \"placement-db-sync-dbsq9\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.042600 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-w74kc"] Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.069685 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpt5f\" (UniqueName: \"kubernetes.io/projected/7b08c17e-22a5-4238-a9df-3efc1ae5f335-kube-api-access-tpt5f\") pod \"horizon-84b975bf87-4qg2x\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.086743 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b08c17e-22a5-4238-a9df-3efc1ae5f335-scripts\") pod \"horizon-84b975bf87-4qg2x\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.086821 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b08c17e-22a5-4238-a9df-3efc1ae5f335-logs\") pod \"horizon-84b975bf87-4qg2x\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.086871 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a3bc02-1357-4751-9496-a41526515867-combined-ca-bundle\") pod \"barbican-db-sync-x2247\" (UID: \"f0a3bc02-1357-4751-9496-a41526515867\") " pod="openstack/barbican-db-sync-x2247" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.087008 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79rf9\" (UniqueName: \"kubernetes.io/projected/f0a3bc02-1357-4751-9496-a41526515867-kube-api-access-79rf9\") pod \"barbican-db-sync-x2247\" (UID: \"f0a3bc02-1357-4751-9496-a41526515867\") " pod="openstack/barbican-db-sync-x2247" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.087113 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0a3bc02-1357-4751-9496-a41526515867-db-sync-config-data\") pod \"barbican-db-sync-x2247\" (UID: \"f0a3bc02-1357-4751-9496-a41526515867\") " pod="openstack/barbican-db-sync-x2247" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.087139 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/7b08c17e-22a5-4238-a9df-3efc1ae5f335-config-data\") pod \"horizon-84b975bf87-4qg2x\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.087224 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7b08c17e-22a5-4238-a9df-3efc1ae5f335-horizon-secret-key\") pod \"horizon-84b975bf87-4qg2x\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.110993 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.112975 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.113310 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b08c17e-22a5-4238-a9df-3efc1ae5f335-scripts\") pod \"horizon-84b975bf87-4qg2x\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.113990 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b08c17e-22a5-4238-a9df-3efc1ae5f335-logs\") pod \"horizon-84b975bf87-4qg2x\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.114477 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7b08c17e-22a5-4238-a9df-3efc1ae5f335-horizon-secret-key\") pod \"horizon-84b975bf87-4qg2x\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.116823 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.117937 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b08c17e-22a5-4238-a9df-3efc1ae5f335-config-data\") pod \"horizon-84b975bf87-4qg2x\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.118311 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0a3bc02-1357-4751-9496-a41526515867-db-sync-config-data\") pod \"barbican-db-sync-x2247\" (UID: \"f0a3bc02-1357-4751-9496-a41526515867\") " pod="openstack/barbican-db-sync-x2247" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.120827 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-cw77m"] Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.122207 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.137554 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.142674 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a3bc02-1357-4751-9496-a41526515867-combined-ca-bundle\") pod \"barbican-db-sync-x2247\" (UID: \"f0a3bc02-1357-4751-9496-a41526515867\") " pod="openstack/barbican-db-sync-x2247" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.143815 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpt5f\" (UniqueName: \"kubernetes.io/projected/7b08c17e-22a5-4238-a9df-3efc1ae5f335-kube-api-access-tpt5f\") pod \"horizon-84b975bf87-4qg2x\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.144212 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.153305 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-cw77m"] Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.178494 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dbsq9" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.186925 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79rf9\" (UniqueName: \"kubernetes.io/projected/f0a3bc02-1357-4751-9496-a41526515867-kube-api-access-79rf9\") pod \"barbican-db-sync-x2247\" (UID: \"f0a3bc02-1357-4751-9496-a41526515867\") " pod="openstack/barbican-db-sync-x2247" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.189943 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-run-httpd\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.190016 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-log-httpd\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.190055 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.190132 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv95m\" (UniqueName: \"kubernetes.io/projected/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-kube-api-access-qv95m\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.190240 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-config-data\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.190267 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.190308 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-scripts\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.202933 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.222265 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-x2247" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.298100 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.298179 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-config-data\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.298203 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkrtc\" (UniqueName: \"kubernetes.io/projected/11257192-cef9-4afe-89a3-4b9d60a9914c-kube-api-access-hkrtc\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.298222 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.298240 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.298261 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-config\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: 
\"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.298284 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-scripts\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.298316 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-run-httpd\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.298344 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-log-httpd\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.298363 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.298395 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.298416 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.298454 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv95m\" (UniqueName: \"kubernetes.io/projected/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-kube-api-access-qv95m\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.302678 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-run-httpd\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.302770 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-log-httpd\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.345167 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-config-data\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.349010 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.349846 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-scripts\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.352169 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.352638 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv95m\" (UniqueName: \"kubernetes.io/projected/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-kube-api-access-qv95m\") pod \"ceilometer-0\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.363931 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-cw77m"] Oct 14 15:09:27 crc kubenswrapper[4860]: E1014 15:09:27.365610 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-hkrtc ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" podUID="11257192-cef9-4afe-89a3-4b9d60a9914c" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.404089 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.404164 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkrtc\" (UniqueName: \"kubernetes.io/projected/11257192-cef9-4afe-89a3-4b9d60a9914c-kube-api-access-hkrtc\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.404182 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.404227 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-config\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.404296 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.404315 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.405252 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.405860 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-config\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.406350 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.406410 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.407093 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.435090 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-x4p9t"] Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.436580 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.461777 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkrtc\" (UniqueName: \"kubernetes.io/projected/11257192-cef9-4afe-89a3-4b9d60a9914c-kube-api-access-hkrtc\") pod \"dnsmasq-dns-58dd9ff6bc-cw77m\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.465891 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-x4p9t"] Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.508579 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.509927 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-config\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.514296 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.514446 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.514579 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtc6q\" (UniqueName: \"kubernetes.io/projected/6252f636-188e-4b89-8092-3ea73fe73fbe-kube-api-access-vtc6q\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.514732 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.514810 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.533071 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w74kc" event={"ID":"7b8c3dcb-4c41-43bd-852a-ad86946b1124","Type":"ContainerStarted","Data":"189e8b06495de739e3fedb0c7ff034eedd2dfd61cb8f2989a0a6541e47e8dcf2"} Oct 
14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.559447 4860 generic.go:334] "Generic (PLEG): container finished" podID="76fdb864-1fd8-4ca3-ab7a-3925084b3a21" containerID="c30d38acf2d80ad97e8e58fb2db53065cb8f3958dec558c2542e96a85485c9d5" exitCode=0 Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.559778 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.564118 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" event={"ID":"76fdb864-1fd8-4ca3-ab7a-3925084b3a21","Type":"ContainerDied","Data":"c30d38acf2d80ad97e8e58fb2db53065cb8f3958dec558c2542e96a85485c9d5"} Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.618924 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.619426 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-config\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.619462 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.619505 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.619546 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtc6q\" (UniqueName: \"kubernetes.io/projected/6252f636-188e-4b89-8092-3ea73fe73fbe-kube-api-access-vtc6q\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.619592 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.619614 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.621304 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: 
\"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.624722 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.625073 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.630532 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.630549 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-config\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.669574 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtc6q\" (UniqueName: \"kubernetes.io/projected/6252f636-188e-4b89-8092-3ea73fe73fbe-kube-api-access-vtc6q\") pod \"dnsmasq-dns-785d8bcb8c-x4p9t\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.729477 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-config\") pod \"11257192-cef9-4afe-89a3-4b9d60a9914c\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.729585 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-dns-svc\") pod \"11257192-cef9-4afe-89a3-4b9d60a9914c\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.729630 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-ovsdbserver-sb\") pod \"11257192-cef9-4afe-89a3-4b9d60a9914c\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.729682 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-ovsdbserver-nb\") pod \"11257192-cef9-4afe-89a3-4b9d60a9914c\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.729738 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkrtc\" 
(UniqueName: \"kubernetes.io/projected/11257192-cef9-4afe-89a3-4b9d60a9914c-kube-api-access-hkrtc\") pod \"11257192-cef9-4afe-89a3-4b9d60a9914c\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.729773 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-dns-swift-storage-0\") pod \"11257192-cef9-4afe-89a3-4b9d60a9914c\" (UID: \"11257192-cef9-4afe-89a3-4b9d60a9914c\") " Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.730501 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "11257192-cef9-4afe-89a3-4b9d60a9914c" (UID: "11257192-cef9-4afe-89a3-4b9d60a9914c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.730855 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-config" (OuterVolumeSpecName: "config") pod "11257192-cef9-4afe-89a3-4b9d60a9914c" (UID: "11257192-cef9-4afe-89a3-4b9d60a9914c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.731224 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11257192-cef9-4afe-89a3-4b9d60a9914c" (UID: "11257192-cef9-4afe-89a3-4b9d60a9914c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.731534 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "11257192-cef9-4afe-89a3-4b9d60a9914c" (UID: "11257192-cef9-4afe-89a3-4b9d60a9914c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.736887 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "11257192-cef9-4afe-89a3-4b9d60a9914c" (UID: "11257192-cef9-4afe-89a3-4b9d60a9914c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.744168 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11257192-cef9-4afe-89a3-4b9d60a9914c-kube-api-access-hkrtc" (OuterVolumeSpecName: "kube-api-access-hkrtc") pod "11257192-cef9-4afe-89a3-4b9d60a9914c" (UID: "11257192-cef9-4afe-89a3-4b9d60a9914c"). InnerVolumeSpecName "kube-api-access-hkrtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.809457 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.827131 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-25qk2"] Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.839734 4860 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.839775 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.839785 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.839794 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkrtc\" (UniqueName: \"kubernetes.io/projected/11257192-cef9-4afe-89a3-4b9d60a9914c-kube-api-access-hkrtc\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.839804 4860 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:27 crc kubenswrapper[4860]: I1014 15:09:27.839813 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11257192-cef9-4afe-89a3-4b9d60a9914c-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.004784 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dhd74"] Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.432130 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.433705 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.434225 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.440728 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.441550 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.441576 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-g6hpd" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.462105 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6466c9b897-b8tk5"] Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.483638 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.495242 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-grpb9"] Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.566772 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-ovsdbserver-sb\") pod \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.566908 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-ovsdbserver-nb\") pod \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.566951 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qjdp\" (UniqueName: \"kubernetes.io/projected/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-kube-api-access-6qjdp\") pod \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.566985 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-config\") pod \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.567008 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-dns-svc\") pod \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.567052 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-dns-swift-storage-0\") pod \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\" (UID: \"76fdb864-1fd8-4ca3-ab7a-3925084b3a21\") " Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.567301 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " 
pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.567391 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhwjg\" (UniqueName: \"kubernetes.io/projected/4a56ac53-1e28-4f84-a63c-373c1159ea14-kube-api-access-fhwjg\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.567430 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.567687 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4a56ac53-1e28-4f84-a63c-373c1159ea14-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.567723 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a56ac53-1e28-4f84-a63c-373c1159ea14-logs\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.567752 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-scripts\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.567779 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-config-data\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.574230 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-kube-api-access-6qjdp" (OuterVolumeSpecName: "kube-api-access-6qjdp") pod "76fdb864-1fd8-4ca3-ab7a-3925084b3a21" (UID: "76fdb864-1fd8-4ca3-ab7a-3925084b3a21"). InnerVolumeSpecName "kube-api-access-6qjdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.612296 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "76fdb864-1fd8-4ca3-ab7a-3925084b3a21" (UID: "76fdb864-1fd8-4ca3-ab7a-3925084b3a21"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.616373 4860 generic.go:334] "Generic (PLEG): container finished" podID="ea67bc14-5a08-414d-9951-dbcb24dc99a4" containerID="ca8e83a59ddb781fe5bbdfe52be3d4d4272a565173cb2806290b228f13f950ef" exitCode=0 Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.616561 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-25qk2" event={"ID":"ea67bc14-5a08-414d-9951-dbcb24dc99a4","Type":"ContainerDied","Data":"ca8e83a59ddb781fe5bbdfe52be3d4d4272a565173cb2806290b228f13f950ef"} Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.616591 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-25qk2" event={"ID":"ea67bc14-5a08-414d-9951-dbcb24dc99a4","Type":"ContainerStarted","Data":"0672bac95d594c645f930e0895331f8c8f2c0c3816d1bd575198a3d7950cc618"} Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.617067 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:09:28 crc kubenswrapper[4860]: E1014 15:09:28.617479 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76fdb864-1fd8-4ca3-ab7a-3925084b3a21" containerName="init" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.617554 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fdb864-1fd8-4ca3-ab7a-3925084b3a21" containerName="init" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.617797 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="76fdb864-1fd8-4ca3-ab7a-3925084b3a21" containerName="init" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.673258 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.673776 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6466c9b897-b8tk5" event={"ID":"d50105c7-28e1-401b-8447-715e9749be1a","Type":"ContainerStarted","Data":"d945d8021065378a88f302b824ee79927b240d1cbb37a5762bfe6523642f398b"} Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.674000 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.677364 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.677817 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76fdb864-1fd8-4ca3-ab7a-3925084b3a21" (UID: "76fdb864-1fd8-4ca3-ab7a-3925084b3a21"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.678389 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.679509 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhwjg\" (UniqueName: \"kubernetes.io/projected/4a56ac53-1e28-4f84-a63c-373c1159ea14-kube-api-access-fhwjg\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.679788 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.680092 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.680252 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4a56ac53-1e28-4f84-a63c-373c1159ea14-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.680529 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a56ac53-1e28-4f84-a63c-373c1159ea14-logs\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.681641 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4a56ac53-1e28-4f84-a63c-373c1159ea14-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.681908 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a56ac53-1e28-4f84-a63c-373c1159ea14-logs\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.681963 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-scripts\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.682056 
4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-config-data\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.691322 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "76fdb864-1fd8-4ca3-ab7a-3925084b3a21" (UID: "76fdb864-1fd8-4ca3-ab7a-3925084b3a21"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.695263 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qjdp\" (UniqueName: \"kubernetes.io/projected/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-kube-api-access-6qjdp\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.695295 4860 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.695304 4860 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.695316 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.700597 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-config" (OuterVolumeSpecName: "config") pod "76fdb864-1fd8-4ca3-ab7a-3925084b3a21" (UID: "76fdb864-1fd8-4ca3-ab7a-3925084b3a21"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.700892 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w74kc" event={"ID":"7b8c3dcb-4c41-43bd-852a-ad86946b1124","Type":"ContainerStarted","Data":"88a76723e02d8de3fc034bee165c642e009b34660ef7316fa335fab79b9b9a10"} Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.724248 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "76fdb864-1fd8-4ca3-ab7a-3925084b3a21" (UID: "76fdb864-1fd8-4ca3-ab7a-3925084b3a21"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.736086 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhwjg\" (UniqueName: \"kubernetes.io/projected/4a56ac53-1e28-4f84-a63c-373c1159ea14-kube-api-access-fhwjg\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.738969 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.754620 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-scripts\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.755741 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" event={"ID":"76fdb864-1fd8-4ca3-ab7a-3925084b3a21","Type":"ContainerDied","Data":"b0fca893a0e18cc4845e449375731e7ec9837456ba9b03b02241d6bf8f1a7293"} Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.755815 4860 scope.go:117] "RemoveContainer" containerID="c30d38acf2d80ad97e8e58fb2db53065cb8f3958dec558c2542e96a85485c9d5" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.755987 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-q5pgz" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.756671 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-config-data\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.785075 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-w74kc" podStartSLOduration=3.785055387 podStartE2EDuration="3.785055387s" podCreationTimestamp="2025-10-14 15:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:09:28.745987382 +0000 UTC m=+1230.332770831" watchObservedRunningTime="2025-10-14 15:09:28.785055387 +0000 UTC m=+1230.371838826" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.786894 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-grpb9" event={"ID":"ca080412-b618-4293-a06d-e0d9a774d36b","Type":"ContainerStarted","Data":"589a40560d36b475c5283dc2e77e3f33ae4b569301e96636cd6c2e1c2a9458fc"} Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.801839 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.805020 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dea06d4a-3963-4919-9671-ed906749cdd3-logs\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.805206 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.805274 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncbm5\" (UniqueName: \"kubernetes.io/projected/dea06d4a-3963-4919-9671-ed906749cdd3-kube-api-access-ncbm5\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.805308 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.805353 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.806389 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dea06d4a-3963-4919-9671-ed906749cdd3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.806475 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.806492 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76fdb864-1fd8-4ca3-ab7a-3925084b3a21-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.833102 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-cw77m" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.836193 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dhd74" event={"ID":"c63dca02-9db5-41e7-90a0-0c19bd729242","Type":"ContainerStarted","Data":"954bc4d1818bf622ee8a06144a3b48f2323a934fd95d9db7376cc47b6cd2988a"} Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.836218 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dhd74" event={"ID":"c63dca02-9db5-41e7-90a0-0c19bd729242","Type":"ContainerStarted","Data":"d52c990c19b00f963efa7c0aa2a8b4b9552db80fb9340f1ce617e91e9a787773"} Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.872333 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.931850 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.931907 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncbm5\" (UniqueName: \"kubernetes.io/projected/dea06d4a-3963-4919-9671-ed906749cdd3-kube-api-access-ncbm5\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.931936 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.931960 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.932017 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dea06d4a-3963-4919-9671-ed906749cdd3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.932086 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.932141 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dea06d4a-3963-4919-9671-ed906749cdd3-logs\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.932664 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dea06d4a-3963-4919-9671-ed906749cdd3-logs\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.932781 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.934628 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-x2247"] Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.936188 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dea06d4a-3963-4919-9671-ed906749cdd3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.941455 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.967612 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.976297 4860 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.980533 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncbm5\" (UniqueName: \"kubernetes.io/projected/dea06d4a-3963-4919-9671-ed906749cdd3-kube-api-access-ncbm5\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:28 crc kubenswrapper[4860]: I1014 15:09:28.982728 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.022069 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dbsq9"] Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.060962 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.064174 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-dhd74" podStartSLOduration=3.064156275 podStartE2EDuration="3.064156275s" podCreationTimestamp="2025-10-14 15:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:09:28.934565182 +0000 UTC m=+1230.521348631" watchObservedRunningTime="2025-10-14 15:09:29.064156275 +0000 UTC m=+1230.650939724" Oct 14 15:09:29 crc kubenswrapper[4860]: W1014 15:09:29.065997 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3324c4e1_abc6_473d_8d14_28d41a4e27a8.slice/crio-fa9f43088f302010bb9e2b081232f6b597b6569796f3ceff14b82227fb242d9e WatchSource:0}: Error finding container fa9f43088f302010bb9e2b081232f6b597b6569796f3ceff14b82227fb242d9e: Status 404 returned error can't find the container with id fa9f43088f302010bb9e2b081232f6b597b6569796f3ceff14b82227fb242d9e Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.067654 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.138437 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84b975bf87-4qg2x"] Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.138674 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-x4p9t"] Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.142520 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-q5pgz"] Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.153582 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-q5pgz"] Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.167781 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-cw77m"] Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.178371 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-cw77m"] Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.253368 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.253430 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.316223 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.411239 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.575765 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-ovsdbserver-nb\") pod \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.576257 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-dns-swift-storage-0\") pod \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.576308 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snx68\" (UniqueName: \"kubernetes.io/projected/ea67bc14-5a08-414d-9951-dbcb24dc99a4-kube-api-access-snx68\") pod \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.576383 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-dns-svc\") pod \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.576533 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-config\") pod \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.576568 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-ovsdbserver-sb\") pod \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\" (UID: \"ea67bc14-5a08-414d-9951-dbcb24dc99a4\") " Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.598660 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea67bc14-5a08-414d-9951-dbcb24dc99a4-kube-api-access-snx68" (OuterVolumeSpecName: "kube-api-access-snx68") pod "ea67bc14-5a08-414d-9951-dbcb24dc99a4" (UID: "ea67bc14-5a08-414d-9951-dbcb24dc99a4"). InnerVolumeSpecName "kube-api-access-snx68". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.648177 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-config" (OuterVolumeSpecName: "config") pod "ea67bc14-5a08-414d-9951-dbcb24dc99a4" (UID: "ea67bc14-5a08-414d-9951-dbcb24dc99a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.669932 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ea67bc14-5a08-414d-9951-dbcb24dc99a4" (UID: "ea67bc14-5a08-414d-9951-dbcb24dc99a4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.678195 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snx68\" (UniqueName: \"kubernetes.io/projected/ea67bc14-5a08-414d-9951-dbcb24dc99a4-kube-api-access-snx68\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.678213 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.678221 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.689386 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea67bc14-5a08-414d-9951-dbcb24dc99a4" (UID: "ea67bc14-5a08-414d-9951-dbcb24dc99a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.756164 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ea67bc14-5a08-414d-9951-dbcb24dc99a4" (UID: "ea67bc14-5a08-414d-9951-dbcb24dc99a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.757933 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ea67bc14-5a08-414d-9951-dbcb24dc99a4" (UID: "ea67bc14-5a08-414d-9951-dbcb24dc99a4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.779482 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.779507 4860 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.779518 4860 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea67bc14-5a08-414d-9951-dbcb24dc99a4-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.930482 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-25qk2" event={"ID":"ea67bc14-5a08-414d-9951-dbcb24dc99a4","Type":"ContainerDied","Data":"0672bac95d594c645f930e0895331f8c8f2c0c3816d1bd575198a3d7950cc618"} Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.930820 4860 scope.go:117] "RemoveContainer" containerID="ca8e83a59ddb781fe5bbdfe52be3d4d4272a565173cb2806290b228f13f950ef" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.930518 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-25qk2" Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.942087 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b84e0757-6fba-44cd-a37d-0e7c06eab0e4","Type":"ContainerStarted","Data":"bc04e7d8a088bdfc6fef0b96e20eba16d949562021fc648ade4495125c166315"} Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.943402 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x2247" event={"ID":"f0a3bc02-1357-4751-9496-a41526515867","Type":"ContainerStarted","Data":"b6f0871a5cf1723397d56635c1f9b2b4683c642398ce15541c40fba9d5cb7e39"} Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.948406 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" event={"ID":"6252f636-188e-4b89-8092-3ea73fe73fbe","Type":"ContainerStarted","Data":"81bbd94e813074cd6372482d49cfb7712e3e769a9ffda1e7ae07a351c45ed2ba"} Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.971422 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84b975bf87-4qg2x" event={"ID":"7b08c17e-22a5-4238-a9df-3efc1ae5f335","Type":"ContainerStarted","Data":"21a500bc89f0a1050debff8437ec84ee5fda6b79df6ba9ab1725c727f76968a0"} Oct 14 15:09:29 crc kubenswrapper[4860]: I1014 15:09:29.993115 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dbsq9" event={"ID":"3324c4e1-abc6-473d-8d14-28d41a4e27a8","Type":"ContainerStarted","Data":"fa9f43088f302010bb9e2b081232f6b597b6569796f3ceff14b82227fb242d9e"} Oct 14 15:09:30 crc kubenswrapper[4860]: I1014 15:09:30.038357 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-25qk2"] Oct 14 15:09:30 crc kubenswrapper[4860]: I1014 15:09:30.046006 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-25qk2"] Oct 14 15:09:30 crc kubenswrapper[4860]: I1014 15:09:30.305405 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-external-api-0"] Oct 14 15:09:30 crc kubenswrapper[4860]: W1014 15:09:30.336288 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a56ac53_1e28_4f84_a63c_373c1159ea14.slice/crio-1cfa241aa991d310bdd1859a6d83f6d99cd978612698791052dcb485c15ab590 WatchSource:0}: Error finding container 1cfa241aa991d310bdd1859a6d83f6d99cd978612698791052dcb485c15ab590: Status 404 returned error can't find the container with id 1cfa241aa991d310bdd1859a6d83f6d99cd978612698791052dcb485c15ab590 Oct 14 15:09:30 crc kubenswrapper[4860]: I1014 15:09:30.517322 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.117271 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11257192-cef9-4afe-89a3-4b9d60a9914c" path="/var/lib/kubelet/pods/11257192-cef9-4afe-89a3-4b9d60a9914c/volumes" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.117659 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76fdb864-1fd8-4ca3-ab7a-3925084b3a21" path="/var/lib/kubelet/pods/76fdb864-1fd8-4ca3-ab7a-3925084b3a21/volumes" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.129258 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea67bc14-5a08-414d-9951-dbcb24dc99a4" path="/var/lib/kubelet/pods/ea67bc14-5a08-414d-9951-dbcb24dc99a4/volumes" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.129939 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.156848 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6466c9b897-b8tk5"] Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.175919 4860 generic.go:334] "Generic (PLEG): container finished" podID="6252f636-188e-4b89-8092-3ea73fe73fbe" containerID="834dd2e7f80cc544d09752be20a68811aa81d0c5cda68707d1ed8c568133b827" exitCode=0 Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.176015 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" event={"ID":"6252f636-188e-4b89-8092-3ea73fe73fbe","Type":"ContainerDied","Data":"834dd2e7f80cc544d09752be20a68811aa81d0c5cda68707d1ed8c568133b827"} Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.209578 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6b4d9d676f-t7pss"] Oct 14 15:09:31 crc kubenswrapper[4860]: E1014 15:09:31.209935 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea67bc14-5a08-414d-9951-dbcb24dc99a4" containerName="init" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.209949 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea67bc14-5a08-414d-9951-dbcb24dc99a4" containerName="init" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.210165 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea67bc14-5a08-414d-9951-dbcb24dc99a4" containerName="init" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.210989 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.213486 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dea06d4a-3963-4919-9671-ed906749cdd3","Type":"ContainerStarted","Data":"fbed3488bbe819ca72eae6a570005f36d65b0c71eebe0f55986b4f32a77fac7f"} Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.214795 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/531feef3-54a8-4a76-b87f-4fe76d0c7e46-logs\") pod \"horizon-6b4d9d676f-t7pss\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.215022 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb2m7\" (UniqueName: \"kubernetes.io/projected/531feef3-54a8-4a76-b87f-4fe76d0c7e46-kube-api-access-fb2m7\") pod \"horizon-6b4d9d676f-t7pss\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.215066 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/531feef3-54a8-4a76-b87f-4fe76d0c7e46-scripts\") pod \"horizon-6b4d9d676f-t7pss\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.215252 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/531feef3-54a8-4a76-b87f-4fe76d0c7e46-horizon-secret-key\") pod \"horizon-6b4d9d676f-t7pss\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.215307 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/531feef3-54a8-4a76-b87f-4fe76d0c7e46-config-data\") pod \"horizon-6b4d9d676f-t7pss\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.222391 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4a56ac53-1e28-4f84-a63c-373c1159ea14","Type":"ContainerStarted","Data":"1cfa241aa991d310bdd1859a6d83f6d99cd978612698791052dcb485c15ab590"} Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.270497 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b4d9d676f-t7pss"] Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.332837 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/531feef3-54a8-4a76-b87f-4fe76d0c7e46-horizon-secret-key\") pod \"horizon-6b4d9d676f-t7pss\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.332896 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/531feef3-54a8-4a76-b87f-4fe76d0c7e46-config-data\") pod \"horizon-6b4d9d676f-t7pss\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") 
" pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.332972 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/531feef3-54a8-4a76-b87f-4fe76d0c7e46-logs\") pod \"horizon-6b4d9d676f-t7pss\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.333134 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb2m7\" (UniqueName: \"kubernetes.io/projected/531feef3-54a8-4a76-b87f-4fe76d0c7e46-kube-api-access-fb2m7\") pod \"horizon-6b4d9d676f-t7pss\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.333154 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/531feef3-54a8-4a76-b87f-4fe76d0c7e46-scripts\") pod \"horizon-6b4d9d676f-t7pss\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.333921 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/531feef3-54a8-4a76-b87f-4fe76d0c7e46-scripts\") pod \"horizon-6b4d9d676f-t7pss\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.336827 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/531feef3-54a8-4a76-b87f-4fe76d0c7e46-logs\") pod \"horizon-6b4d9d676f-t7pss\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.337897 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/531feef3-54a8-4a76-b87f-4fe76d0c7e46-config-data\") pod \"horizon-6b4d9d676f-t7pss\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.348690 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.383012 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/531feef3-54a8-4a76-b87f-4fe76d0c7e46-horizon-secret-key\") pod \"horizon-6b4d9d676f-t7pss\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.387921 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb2m7\" (UniqueName: \"kubernetes.io/projected/531feef3-54a8-4a76-b87f-4fe76d0c7e46-kube-api-access-fb2m7\") pod \"horizon-6b4d9d676f-t7pss\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.424441 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:09:31 crc kubenswrapper[4860]: I1014 15:09:31.444315 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:09:32 crc kubenswrapper[4860]: I1014 15:09:32.216836 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b4d9d676f-t7pss"] Oct 14 15:09:32 crc kubenswrapper[4860]: I1014 15:09:32.286493 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" event={"ID":"6252f636-188e-4b89-8092-3ea73fe73fbe","Type":"ContainerStarted","Data":"7d9ffd06c86853e24817c32040c554d316a0a24507afaa4edc1f5d1345f88ee3"} Oct 14 15:09:32 crc kubenswrapper[4860]: I1014 15:09:32.287280 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:32 crc kubenswrapper[4860]: W1014 15:09:32.296102 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod531feef3_54a8_4a76_b87f_4fe76d0c7e46.slice/crio-b014f8b9c9cf5c6a59765e0b37799814280f0732bf339f2a1858b8c08b4c18d8 WatchSource:0}: Error finding container b014f8b9c9cf5c6a59765e0b37799814280f0732bf339f2a1858b8c08b4c18d8: Status 404 returned error can't find the container with id b014f8b9c9cf5c6a59765e0b37799814280f0732bf339f2a1858b8c08b4c18d8 Oct 14 15:09:32 crc kubenswrapper[4860]: I1014 15:09:32.299422 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4a56ac53-1e28-4f84-a63c-373c1159ea14","Type":"ContainerStarted","Data":"590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33"} Oct 14 15:09:32 crc kubenswrapper[4860]: I1014 15:09:32.337305 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" podStartSLOduration=5.337286089 podStartE2EDuration="5.337286089s" podCreationTimestamp="2025-10-14 15:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:09:32.31593954 +0000 UTC m=+1233.902722989" watchObservedRunningTime="2025-10-14 15:09:32.337286089 +0000 UTC m=+1233.924069538" Oct 14 15:09:33 crc kubenswrapper[4860]: I1014 15:09:33.334722 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b4d9d676f-t7pss" event={"ID":"531feef3-54a8-4a76-b87f-4fe76d0c7e46","Type":"ContainerStarted","Data":"b014f8b9c9cf5c6a59765e0b37799814280f0732bf339f2a1858b8c08b4c18d8"} Oct 14 15:09:33 crc kubenswrapper[4860]: I1014 15:09:33.338085 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dea06d4a-3963-4919-9671-ed906749cdd3","Type":"ContainerStarted","Data":"1e1ee50eb6d13f49635132bd509fa859857719dbe6ed55a37b0d28a6f7db6b18"} Oct 14 15:09:33 crc kubenswrapper[4860]: I1014 15:09:33.396434 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4a56ac53-1e28-4f84-a63c-373c1159ea14","Type":"ContainerStarted","Data":"e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179"} Oct 14 15:09:33 crc kubenswrapper[4860]: I1014 15:09:33.396558 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4a56ac53-1e28-4f84-a63c-373c1159ea14" containerName="glance-log" containerID="cri-o://590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33" gracePeriod=30 Oct 14 15:09:33 crc kubenswrapper[4860]: I1014 15:09:33.397044 4860 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4a56ac53-1e28-4f84-a63c-373c1159ea14" containerName="glance-httpd" containerID="cri-o://e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179" gracePeriod=30 Oct 14 15:09:33 crc kubenswrapper[4860]: I1014 15:09:33.431406 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.431387315 podStartE2EDuration="6.431387315s" podCreationTimestamp="2025-10-14 15:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:09:33.425940262 +0000 UTC m=+1235.012723711" watchObservedRunningTime="2025-10-14 15:09:33.431387315 +0000 UTC m=+1235.018170764" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.140556 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.254210 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-combined-ca-bundle\") pod \"4a56ac53-1e28-4f84-a63c-373c1159ea14\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.254275 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhwjg\" (UniqueName: \"kubernetes.io/projected/4a56ac53-1e28-4f84-a63c-373c1159ea14-kube-api-access-fhwjg\") pod \"4a56ac53-1e28-4f84-a63c-373c1159ea14\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.254294 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4a56ac53-1e28-4f84-a63c-373c1159ea14-httpd-run\") pod \"4a56ac53-1e28-4f84-a63c-373c1159ea14\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.254344 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-config-data\") pod \"4a56ac53-1e28-4f84-a63c-373c1159ea14\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.254362 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-scripts\") pod \"4a56ac53-1e28-4f84-a63c-373c1159ea14\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.254397 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a56ac53-1e28-4f84-a63c-373c1159ea14-logs\") pod \"4a56ac53-1e28-4f84-a63c-373c1159ea14\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.254419 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"4a56ac53-1e28-4f84-a63c-373c1159ea14\" (UID: \"4a56ac53-1e28-4f84-a63c-373c1159ea14\") " Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.255771 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/4a56ac53-1e28-4f84-a63c-373c1159ea14-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4a56ac53-1e28-4f84-a63c-373c1159ea14" (UID: "4a56ac53-1e28-4f84-a63c-373c1159ea14"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.264254 4860 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4a56ac53-1e28-4f84-a63c-373c1159ea14-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.270416 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "4a56ac53-1e28-4f84-a63c-373c1159ea14" (UID: "4a56ac53-1e28-4f84-a63c-373c1159ea14"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.270679 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a56ac53-1e28-4f84-a63c-373c1159ea14-logs" (OuterVolumeSpecName: "logs") pod "4a56ac53-1e28-4f84-a63c-373c1159ea14" (UID: "4a56ac53-1e28-4f84-a63c-373c1159ea14"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.283725 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-scripts" (OuterVolumeSpecName: "scripts") pod "4a56ac53-1e28-4f84-a63c-373c1159ea14" (UID: "4a56ac53-1e28-4f84-a63c-373c1159ea14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.283882 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a56ac53-1e28-4f84-a63c-373c1159ea14-kube-api-access-fhwjg" (OuterVolumeSpecName: "kube-api-access-fhwjg") pod "4a56ac53-1e28-4f84-a63c-373c1159ea14" (UID: "4a56ac53-1e28-4f84-a63c-373c1159ea14"). InnerVolumeSpecName "kube-api-access-fhwjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.300547 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a56ac53-1e28-4f84-a63c-373c1159ea14" (UID: "4a56ac53-1e28-4f84-a63c-373c1159ea14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.337173 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-config-data" (OuterVolumeSpecName: "config-data") pod "4a56ac53-1e28-4f84-a63c-373c1159ea14" (UID: "4a56ac53-1e28-4f84-a63c-373c1159ea14"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.368994 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.369039 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhwjg\" (UniqueName: \"kubernetes.io/projected/4a56ac53-1e28-4f84-a63c-373c1159ea14-kube-api-access-fhwjg\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.369050 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.369062 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a56ac53-1e28-4f84-a63c-373c1159ea14-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.369071 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a56ac53-1e28-4f84-a63c-373c1159ea14-logs\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.369104 4860 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.410251 4860 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.430982 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dea06d4a-3963-4919-9671-ed906749cdd3","Type":"ContainerStarted","Data":"ef2c63b1fa780dadba665472bf9346a339401f5ae69dcb9201d7099dde38a51d"} Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.431145 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dea06d4a-3963-4919-9671-ed906749cdd3" containerName="glance-log" containerID="cri-o://1e1ee50eb6d13f49635132bd509fa859857719dbe6ed55a37b0d28a6f7db6b18" gracePeriod=30 Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.431182 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dea06d4a-3963-4919-9671-ed906749cdd3" containerName="glance-httpd" containerID="cri-o://ef2c63b1fa780dadba665472bf9346a339401f5ae69dcb9201d7099dde38a51d" gracePeriod=30 Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.435141 4860 generic.go:334] "Generic (PLEG): container finished" podID="4a56ac53-1e28-4f84-a63c-373c1159ea14" containerID="e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179" exitCode=143 Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.435178 4860 generic.go:334] "Generic (PLEG): container finished" podID="4a56ac53-1e28-4f84-a63c-373c1159ea14" containerID="590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33" exitCode=143 Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.435198 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"4a56ac53-1e28-4f84-a63c-373c1159ea14","Type":"ContainerDied","Data":"e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179"} Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.435412 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4a56ac53-1e28-4f84-a63c-373c1159ea14","Type":"ContainerDied","Data":"590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33"} Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.435427 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4a56ac53-1e28-4f84-a63c-373c1159ea14","Type":"ContainerDied","Data":"1cfa241aa991d310bdd1859a6d83f6d99cd978612698791052dcb485c15ab590"} Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.435460 4860 scope.go:117] "RemoveContainer" containerID="e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.436692 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.462620 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.46259788 podStartE2EDuration="7.46259788s" podCreationTimestamp="2025-10-14 15:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:09:34.454766939 +0000 UTC m=+1236.041550388" watchObservedRunningTime="2025-10-14 15:09:34.46259788 +0000 UTC m=+1236.049381329" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.470288 4860 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.493155 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.502066 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.521083 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:09:34 crc kubenswrapper[4860]: E1014 15:09:34.521695 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a56ac53-1e28-4f84-a63c-373c1159ea14" containerName="glance-httpd" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.521709 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a56ac53-1e28-4f84-a63c-373c1159ea14" containerName="glance-httpd" Oct 14 15:09:34 crc kubenswrapper[4860]: E1014 15:09:34.521773 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a56ac53-1e28-4f84-a63c-373c1159ea14" containerName="glance-log" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.521780 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a56ac53-1e28-4f84-a63c-373c1159ea14" containerName="glance-log" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.522137 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a56ac53-1e28-4f84-a63c-373c1159ea14" containerName="glance-httpd" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.522158 4860 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4a56ac53-1e28-4f84-a63c-373c1159ea14" containerName="glance-log" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.526751 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.528657 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.530296 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.537595 4860 scope.go:117] "RemoveContainer" containerID="590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.571544 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/147336fc-14ab-4c07-8da5-9dc29f2be3d4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.571582 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-624wl\" (UniqueName: \"kubernetes.io/projected/147336fc-14ab-4c07-8da5-9dc29f2be3d4-kube-api-access-624wl\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.571668 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-config-data\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.571691 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.571715 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-scripts\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.571731 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.571764 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/147336fc-14ab-4c07-8da5-9dc29f2be3d4-logs\") pod \"glance-default-external-api-0\" 
(UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.637270 4860 scope.go:117] "RemoveContainer" containerID="e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179" Oct 14 15:09:34 crc kubenswrapper[4860]: E1014 15:09:34.637701 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179\": container with ID starting with e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179 not found: ID does not exist" containerID="e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.637726 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179"} err="failed to get container status \"e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179\": rpc error: code = NotFound desc = could not find container \"e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179\": container with ID starting with e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179 not found: ID does not exist" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.637744 4860 scope.go:117] "RemoveContainer" containerID="590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33" Oct 14 15:09:34 crc kubenswrapper[4860]: E1014 15:09:34.638095 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33\": container with ID starting with 590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33 not found: ID does not exist" containerID="590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.638112 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33"} err="failed to get container status \"590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33\": rpc error: code = NotFound desc = could not find container \"590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33\": container with ID starting with 590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33 not found: ID does not exist" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.638126 4860 scope.go:117] "RemoveContainer" containerID="e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.638387 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179"} err="failed to get container status \"e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179\": rpc error: code = NotFound desc = could not find container \"e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179\": container with ID starting with e6944c9e37d9f5e738d0874b48c7e9617537a3f0c6b743b4dac8a3b0d4a22179 not found: ID does not exist" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.638404 4860 scope.go:117] "RemoveContainer" containerID="590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33" Oct 14 15:09:34 crc 
kubenswrapper[4860]: I1014 15:09:34.638621 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33"} err="failed to get container status \"590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33\": rpc error: code = NotFound desc = could not find container \"590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33\": container with ID starting with 590f24e3fb805a9bf06f8804c91b65c3557b516577c0b9e9ed068196e42f5e33 not found: ID does not exist" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.672904 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-config-data\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.672955 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.672982 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-scripts\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.673056 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.673092 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/147336fc-14ab-4c07-8da5-9dc29f2be3d4-logs\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.673135 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/147336fc-14ab-4c07-8da5-9dc29f2be3d4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.673151 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-624wl\" (UniqueName: \"kubernetes.io/projected/147336fc-14ab-4c07-8da5-9dc29f2be3d4-kube-api-access-624wl\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.681551 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: 
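The RemoveContainer / "ContainerStatus from runtime service failed ... NotFound" / "DeleteContainer returned error" repetition above is the kubelet re-driving removal of containers CRI-O has already deleted; a NotFound reply means there is nothing left to do. A sketch of that idempotent-delete pattern using gRPC status codes; the remove helper is a stand-in for the CRI call, not the kubelet's real code path:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIdempotent treats NotFound as success: if the runtime no longer
// knows the container, the removal has already happened.
func removeIdempotent(id string, remove func(string) error) error {
	err := remove(id)
	if status.Code(err) == codes.NotFound {
		return nil // already gone
	}
	return err // nil on success, real error otherwise
}

func main() {
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(removeIdempotent("590f24e3fb80...", gone)) // <nil>
}
```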
\"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.683364 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/147336fc-14ab-4c07-8da5-9dc29f2be3d4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.683445 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/147336fc-14ab-4c07-8da5-9dc29f2be3d4-logs\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.698045 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-scripts\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.698329 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.698368 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-config-data\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.701834 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-624wl\" (UniqueName: \"kubernetes.io/projected/147336fc-14ab-4c07-8da5-9dc29f2be3d4-kube-api-access-624wl\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.745782 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " pod="openstack/glance-default-external-api-0" Oct 14 15:09:34 crc kubenswrapper[4860]: I1014 15:09:34.863112 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.125899 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a56ac53-1e28-4f84-a63c-373c1159ea14" path="/var/lib/kubelet/pods/4a56ac53-1e28-4f84-a63c-373c1159ea14/volumes" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.484109 4860 generic.go:334] "Generic (PLEG): container finished" podID="dea06d4a-3963-4919-9671-ed906749cdd3" containerID="ef2c63b1fa780dadba665472bf9346a339401f5ae69dcb9201d7099dde38a51d" exitCode=0 Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.484141 4860 generic.go:334] "Generic (PLEG): container finished" podID="dea06d4a-3963-4919-9671-ed906749cdd3" containerID="1e1ee50eb6d13f49635132bd509fa859857719dbe6ed55a37b0d28a6f7db6b18" exitCode=143 Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.484200 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dea06d4a-3963-4919-9671-ed906749cdd3","Type":"ContainerDied","Data":"ef2c63b1fa780dadba665472bf9346a339401f5ae69dcb9201d7099dde38a51d"} Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.484225 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dea06d4a-3963-4919-9671-ed906749cdd3","Type":"ContainerDied","Data":"1e1ee50eb6d13f49635132bd509fa859857719dbe6ed55a37b0d28a6f7db6b18"} Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.641062 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.644264 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.703477 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-config-data\") pod \"dea06d4a-3963-4919-9671-ed906749cdd3\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.703564 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dea06d4a-3963-4919-9671-ed906749cdd3-httpd-run\") pod \"dea06d4a-3963-4919-9671-ed906749cdd3\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.703639 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncbm5\" (UniqueName: \"kubernetes.io/projected/dea06d4a-3963-4919-9671-ed906749cdd3-kube-api-access-ncbm5\") pod \"dea06d4a-3963-4919-9671-ed906749cdd3\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.703689 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"dea06d4a-3963-4919-9671-ed906749cdd3\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.703755 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-scripts\") pod \"dea06d4a-3963-4919-9671-ed906749cdd3\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " Oct 14 15:09:35 crc 
kubenswrapper[4860]: I1014 15:09:35.703787 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dea06d4a-3963-4919-9671-ed906749cdd3-logs\") pod \"dea06d4a-3963-4919-9671-ed906749cdd3\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.703813 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-combined-ca-bundle\") pod \"dea06d4a-3963-4919-9671-ed906749cdd3\" (UID: \"dea06d4a-3963-4919-9671-ed906749cdd3\") " Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.704139 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dea06d4a-3963-4919-9671-ed906749cdd3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dea06d4a-3963-4919-9671-ed906749cdd3" (UID: "dea06d4a-3963-4919-9671-ed906749cdd3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.707568 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dea06d4a-3963-4919-9671-ed906749cdd3-logs" (OuterVolumeSpecName: "logs") pod "dea06d4a-3963-4919-9671-ed906749cdd3" (UID: "dea06d4a-3963-4919-9671-ed906749cdd3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.711358 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "dea06d4a-3963-4919-9671-ed906749cdd3" (UID: "dea06d4a-3963-4919-9671-ed906749cdd3"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.711710 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea06d4a-3963-4919-9671-ed906749cdd3-kube-api-access-ncbm5" (OuterVolumeSpecName: "kube-api-access-ncbm5") pod "dea06d4a-3963-4919-9671-ed906749cdd3" (UID: "dea06d4a-3963-4919-9671-ed906749cdd3"). InnerVolumeSpecName "kube-api-access-ncbm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.712195 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-scripts" (OuterVolumeSpecName: "scripts") pod "dea06d4a-3963-4919-9671-ed906749cdd3" (UID: "dea06d4a-3963-4919-9671-ed906749cdd3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.735217 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dea06d4a-3963-4919-9671-ed906749cdd3" (UID: "dea06d4a-3963-4919-9671-ed906749cdd3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.767218 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-config-data" (OuterVolumeSpecName: "config-data") pod "dea06d4a-3963-4919-9671-ed906749cdd3" (UID: "dea06d4a-3963-4919-9671-ed906749cdd3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.805322 4860 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dea06d4a-3963-4919-9671-ed906749cdd3-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.805360 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncbm5\" (UniqueName: \"kubernetes.io/projected/dea06d4a-3963-4919-9671-ed906749cdd3-kube-api-access-ncbm5\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.805407 4860 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.805417 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.805426 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dea06d4a-3963-4919-9671-ed906749cdd3-logs\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.805434 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.805444 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dea06d4a-3963-4919-9671-ed906749cdd3-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.830255 4860 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Oct 14 15:09:35 crc kubenswrapper[4860]: I1014 15:09:35.906966 4860 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.512451 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.512658 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dea06d4a-3963-4919-9671-ed906749cdd3","Type":"ContainerDied","Data":"fbed3488bbe819ca72eae6a570005f36d65b0c71eebe0f55986b4f32a77fac7f"} Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.513957 4860 scope.go:117] "RemoveContainer" containerID="ef2c63b1fa780dadba665472bf9346a339401f5ae69dcb9201d7099dde38a51d" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.518280 4860 generic.go:334] "Generic (PLEG): container finished" podID="7b8c3dcb-4c41-43bd-852a-ad86946b1124" containerID="88a76723e02d8de3fc034bee165c642e009b34660ef7316fa335fab79b9b9a10" exitCode=0 Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.518338 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w74kc" event={"ID":"7b8c3dcb-4c41-43bd-852a-ad86946b1124","Type":"ContainerDied","Data":"88a76723e02d8de3fc034bee165c642e009b34660ef7316fa335fab79b9b9a10"} Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.520364 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"147336fc-14ab-4c07-8da5-9dc29f2be3d4","Type":"ContainerStarted","Data":"e18fb086acbbfec2055b07287f27240c308e732af7f99d0dc0537c6cae629b69"} Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.579890 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.600088 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.605243 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:09:36 crc kubenswrapper[4860]: E1014 15:09:36.605656 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea06d4a-3963-4919-9671-ed906749cdd3" containerName="glance-log" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.605669 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea06d4a-3963-4919-9671-ed906749cdd3" containerName="glance-log" Oct 14 15:09:36 crc kubenswrapper[4860]: E1014 15:09:36.605697 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea06d4a-3963-4919-9671-ed906749cdd3" containerName="glance-httpd" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.605706 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea06d4a-3963-4919-9671-ed906749cdd3" containerName="glance-httpd" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.605897 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea06d4a-3963-4919-9671-ed906749cdd3" containerName="glance-httpd" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.605917 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea06d4a-3963-4919-9671-ed906749cdd3" containerName="glance-log" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.606810 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.609591 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.615994 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.618868 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.618899 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.618931 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2262533-70e8-4bb7-80b4-9576b13ab2a5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.619017 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.619066 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.619092 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2262533-70e8-4bb7-80b4-9576b13ab2a5-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.619125 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd5gt\" (UniqueName: \"kubernetes.io/projected/f2262533-70e8-4bb7-80b4-9576b13ab2a5-kube-api-access-bd5gt\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.720041 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.720087 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.720121 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2262533-70e8-4bb7-80b4-9576b13ab2a5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.720190 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.720238 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.720271 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2262533-70e8-4bb7-80b4-9576b13ab2a5-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.720317 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd5gt\" (UniqueName: \"kubernetes.io/projected/f2262533-70e8-4bb7-80b4-9576b13ab2a5-kube-api-access-bd5gt\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.729523 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.732253 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.732550 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" 
Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.732802 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2262533-70e8-4bb7-80b4-9576b13ab2a5-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.733012 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2262533-70e8-4bb7-80b4-9576b13ab2a5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.739375 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd5gt\" (UniqueName: \"kubernetes.io/projected/f2262533-70e8-4bb7-80b4-9576b13ab2a5-kube-api-access-bd5gt\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.751352 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.815290 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:09:36 crc kubenswrapper[4860]: I1014 15:09:36.928913 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 15:09:37 crc kubenswrapper[4860]: I1014 15:09:37.082289 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea06d4a-3963-4919-9671-ed906749cdd3" path="/var/lib/kubelet/pods/dea06d4a-3963-4919-9671-ed906749cdd3/volumes" Oct 14 15:09:37 crc kubenswrapper[4860]: I1014 15:09:37.533700 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"147336fc-14ab-4c07-8da5-9dc29f2be3d4","Type":"ContainerStarted","Data":"c66d4ebe14789fc6534efb70e5d9d3a4f0baf278dcd10a2040282b04c2aef229"} Oct 14 15:09:37 crc kubenswrapper[4860]: I1014 15:09:37.813221 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:09:37 crc kubenswrapper[4860]: I1014 15:09:37.884549 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9c68c"] Oct 14 15:09:37 crc kubenswrapper[4860]: I1014 15:09:37.884769 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-9c68c" podUID="153ddd28-cece-4e22-956e-421b65491e15" containerName="dnsmasq-dns" containerID="cri-o://1031868bea866a6c4c6c7e94d889d9ef722fef5da8df51ef1f86216bb5c64fec" gracePeriod=10 Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.270503 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84b975bf87-4qg2x"] Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.314588 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7dd7969c76-f8cq5"] Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.322638 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.328743 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.346638 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7dd7969c76-f8cq5"] Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.394684 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.425616 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b4d9d676f-t7pss"] Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.474636 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e59fdcc0-928b-485d-a66b-450a1d1d76f4-logs\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.474738 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e59fdcc0-928b-485d-a66b-450a1d1d76f4-config-data\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.474822 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e59fdcc0-928b-485d-a66b-450a1d1d76f4-scripts\") pod \"horizon-7dd7969c76-f8cq5\" 
(UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.474852 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-combined-ca-bundle\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.474885 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slscq\" (UniqueName: \"kubernetes.io/projected/e59fdcc0-928b-485d-a66b-450a1d1d76f4-kube-api-access-slscq\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.474907 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-horizon-tls-certs\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.474924 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-horizon-secret-key\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.489018 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8795558b4-cgsrj"] Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.490585 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.523839 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8795558b4-cgsrj"] Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.535449 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.552127 4860 generic.go:334] "Generic (PLEG): container finished" podID="153ddd28-cece-4e22-956e-421b65491e15" containerID="1031868bea866a6c4c6c7e94d889d9ef722fef5da8df51ef1f86216bb5c64fec" exitCode=0 Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.553693 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9c68c" event={"ID":"153ddd28-cece-4e22-956e-421b65491e15","Type":"ContainerDied","Data":"1031868bea866a6c4c6c7e94d889d9ef722fef5da8df51ef1f86216bb5c64fec"} Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.577331 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ba50439f-28b5-4b76-9afb-b705c4037f8d-horizon-secret-key\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.577366 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba50439f-28b5-4b76-9afb-b705c4037f8d-logs\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.577402 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba50439f-28b5-4b76-9afb-b705c4037f8d-horizon-tls-certs\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.577447 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e59fdcc0-928b-485d-a66b-450a1d1d76f4-scripts\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.577521 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-combined-ca-bundle\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.577541 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4mxk\" (UniqueName: \"kubernetes.io/projected/ba50439f-28b5-4b76-9afb-b705c4037f8d-kube-api-access-g4mxk\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.577568 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slscq\" (UniqueName: 
\"kubernetes.io/projected/e59fdcc0-928b-485d-a66b-450a1d1d76f4-kube-api-access-slscq\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.577589 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-horizon-tls-certs\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.577608 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-horizon-secret-key\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.577629 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba50439f-28b5-4b76-9afb-b705c4037f8d-combined-ca-bundle\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.577664 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e59fdcc0-928b-485d-a66b-450a1d1d76f4-logs\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.577694 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba50439f-28b5-4b76-9afb-b705c4037f8d-config-data\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.577738 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba50439f-28b5-4b76-9afb-b705c4037f8d-scripts\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.577754 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e59fdcc0-928b-485d-a66b-450a1d1d76f4-config-data\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.579301 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e59fdcc0-928b-485d-a66b-450a1d1d76f4-config-data\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.579807 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e59fdcc0-928b-485d-a66b-450a1d1d76f4-scripts\") pod \"horizon-7dd7969c76-f8cq5\" (UID: 
\"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.581754 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e59fdcc0-928b-485d-a66b-450a1d1d76f4-logs\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.593188 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-horizon-secret-key\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.593859 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-horizon-tls-certs\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.603189 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-combined-ca-bundle\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.613290 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slscq\" (UniqueName: \"kubernetes.io/projected/e59fdcc0-928b-485d-a66b-450a1d1d76f4-kube-api-access-slscq\") pod \"horizon-7dd7969c76-f8cq5\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.639556 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.680105 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba50439f-28b5-4b76-9afb-b705c4037f8d-horizon-tls-certs\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.680202 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4mxk\" (UniqueName: \"kubernetes.io/projected/ba50439f-28b5-4b76-9afb-b705c4037f8d-kube-api-access-g4mxk\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.680254 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba50439f-28b5-4b76-9afb-b705c4037f8d-combined-ca-bundle\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.680314 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba50439f-28b5-4b76-9afb-b705c4037f8d-config-data\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.680363 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba50439f-28b5-4b76-9afb-b705c4037f8d-scripts\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.680414 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ba50439f-28b5-4b76-9afb-b705c4037f8d-horizon-secret-key\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.680428 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba50439f-28b5-4b76-9afb-b705c4037f8d-logs\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.680951 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba50439f-28b5-4b76-9afb-b705c4037f8d-logs\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.682069 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba50439f-28b5-4b76-9afb-b705c4037f8d-scripts\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.685482 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba50439f-28b5-4b76-9afb-b705c4037f8d-horizon-tls-certs\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.688301 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba50439f-28b5-4b76-9afb-b705c4037f8d-combined-ca-bundle\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.688924 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba50439f-28b5-4b76-9afb-b705c4037f8d-config-data\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.698678 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4mxk\" (UniqueName: \"kubernetes.io/projected/ba50439f-28b5-4b76-9afb-b705c4037f8d-kube-api-access-g4mxk\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.708518 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ba50439f-28b5-4b76-9afb-b705c4037f8d-horizon-secret-key\") pod \"horizon-8795558b4-cgsrj\" (UID: \"ba50439f-28b5-4b76-9afb-b705c4037f8d\") " pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.817279 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:09:38 crc kubenswrapper[4860]: I1014 15:09:38.923603 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9c68c" podUID="153ddd28-cece-4e22-956e-421b65491e15" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Oct 14 15:09:43 crc kubenswrapper[4860]: I1014 15:09:43.923103 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9c68c" podUID="153ddd28-cece-4e22-956e-421b65491e15" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Oct 14 15:09:47 crc kubenswrapper[4860]: I1014 15:09:47.966153 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:47 crc kubenswrapper[4860]: I1014 15:09:47.995637 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-credential-keys\") pod \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " Oct 14 15:09:47 crc kubenswrapper[4860]: I1014 15:09:47.995718 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-combined-ca-bundle\") pod \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " Oct 14 15:09:47 crc kubenswrapper[4860]: I1014 15:09:47.995780 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b66cb\" (UniqueName: \"kubernetes.io/projected/7b8c3dcb-4c41-43bd-852a-ad86946b1124-kube-api-access-b66cb\") pod \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " Oct 14 15:09:47 crc kubenswrapper[4860]: I1014 15:09:47.995862 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-config-data\") pod \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " Oct 14 15:09:47 crc kubenswrapper[4860]: I1014 15:09:47.995916 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-scripts\") pod \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " Oct 14 15:09:47 crc kubenswrapper[4860]: I1014 15:09:47.995936 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-fernet-keys\") pod \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\" (UID: \"7b8c3dcb-4c41-43bd-852a-ad86946b1124\") " Oct 14 15:09:48 crc kubenswrapper[4860]: I1014 15:09:48.007131 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7b8c3dcb-4c41-43bd-852a-ad86946b1124" (UID: "7b8c3dcb-4c41-43bd-852a-ad86946b1124"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:48 crc kubenswrapper[4860]: I1014 15:09:48.013634 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-scripts" (OuterVolumeSpecName: "scripts") pod "7b8c3dcb-4c41-43bd-852a-ad86946b1124" (UID: "7b8c3dcb-4c41-43bd-852a-ad86946b1124"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:48 crc kubenswrapper[4860]: I1014 15:09:48.020797 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b8c3dcb-4c41-43bd-852a-ad86946b1124-kube-api-access-b66cb" (OuterVolumeSpecName: "kube-api-access-b66cb") pod "7b8c3dcb-4c41-43bd-852a-ad86946b1124" (UID: "7b8c3dcb-4c41-43bd-852a-ad86946b1124"). InnerVolumeSpecName "kube-api-access-b66cb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:09:48 crc kubenswrapper[4860]: I1014 15:09:48.071980 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-config-data" (OuterVolumeSpecName: "config-data") pod "7b8c3dcb-4c41-43bd-852a-ad86946b1124" (UID: "7b8c3dcb-4c41-43bd-852a-ad86946b1124"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:48 crc kubenswrapper[4860]: I1014 15:09:48.071991 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7b8c3dcb-4c41-43bd-852a-ad86946b1124" (UID: "7b8c3dcb-4c41-43bd-852a-ad86946b1124"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:48 crc kubenswrapper[4860]: I1014 15:09:48.075185 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b8c3dcb-4c41-43bd-852a-ad86946b1124" (UID: "7b8c3dcb-4c41-43bd-852a-ad86946b1124"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:09:48 crc kubenswrapper[4860]: I1014 15:09:48.098415 4860 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:48 crc kubenswrapper[4860]: I1014 15:09:48.098452 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:48 crc kubenswrapper[4860]: I1014 15:09:48.098461 4860 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:48 crc kubenswrapper[4860]: I1014 15:09:48.098470 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:48 crc kubenswrapper[4860]: I1014 15:09:48.098479 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b66cb\" (UniqueName: \"kubernetes.io/projected/7b8c3dcb-4c41-43bd-852a-ad86946b1124-kube-api-access-b66cb\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:48 crc kubenswrapper[4860]: I1014 15:09:48.098487 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8c3dcb-4c41-43bd-852a-ad86946b1124-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:09:48 crc kubenswrapper[4860]: I1014 15:09:48.635765 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w74kc" event={"ID":"7b8c3dcb-4c41-43bd-852a-ad86946b1124","Type":"ContainerDied","Data":"189e8b06495de739e3fedb0c7ff034eedd2dfd61cb8f2989a0a6541e47e8dcf2"} Oct 14 15:09:48 crc kubenswrapper[4860]: I1014 15:09:48.635811 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="189e8b06495de739e3fedb0c7ff034eedd2dfd61cb8f2989a0a6541e47e8dcf2" Oct 14 15:09:48 crc kubenswrapper[4860]: I1014 15:09:48.635812 4860 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w74kc" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.054102 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-w74kc"] Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.085694 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-w74kc"] Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.152428 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wkzht"] Oct 14 15:09:49 crc kubenswrapper[4860]: E1014 15:09:49.152751 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8c3dcb-4c41-43bd-852a-ad86946b1124" containerName="keystone-bootstrap" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.152769 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8c3dcb-4c41-43bd-852a-ad86946b1124" containerName="keystone-bootstrap" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.152939 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b8c3dcb-4c41-43bd-852a-ad86946b1124" containerName="keystone-bootstrap" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.153495 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.159385 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xsgm5" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.159683 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.162162 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.162240 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.170250 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wkzht"] Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.216427 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-scripts\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.318697 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-scripts\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.318885 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mph5q\" (UniqueName: \"kubernetes.io/projected/8616715a-5ecc-4bec-8e55-14626927cce5-kube-api-access-mph5q\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.318931 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-fernet-keys\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.318994 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-config-data\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.319013 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-credential-keys\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.319061 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-combined-ca-bundle\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.322575 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-scripts\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.421221 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-fernet-keys\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.421313 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-config-data\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.421343 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-credential-keys\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.421384 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-combined-ca-bundle\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.422554 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mph5q\" (UniqueName: \"kubernetes.io/projected/8616715a-5ecc-4bec-8e55-14626927cce5-kube-api-access-mph5q\") pod 
\"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.425440 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-combined-ca-bundle\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.425725 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-config-data\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.429450 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-credential-keys\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.431192 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-fernet-keys\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.438585 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mph5q\" (UniqueName: \"kubernetes.io/projected/8616715a-5ecc-4bec-8e55-14626927cce5-kube-api-access-mph5q\") pod \"keystone-bootstrap-wkzht\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: I1014 15:09:49.473701 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:09:49 crc kubenswrapper[4860]: E1014 15:09:49.846090 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Oct 14 15:09:49 crc kubenswrapper[4860]: E1014 15:09:49.846242 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vs4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-dbsq9_openstack(3324c4e1-abc6-473d-8d14-28d41a4e27a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:09:49 crc kubenswrapper[4860]: E1014 15:09:49.847574 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-dbsq9" podUID="3324c4e1-abc6-473d-8d14-28d41a4e27a8" Oct 14 15:09:50 crc kubenswrapper[4860]: E1014 15:09:50.655980 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-dbsq9" podUID="3324c4e1-abc6-473d-8d14-28d41a4e27a8" Oct 14 15:09:51 crc kubenswrapper[4860]: 
I1014 15:09:51.072906 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b8c3dcb-4c41-43bd-852a-ad86946b1124" path="/var/lib/kubelet/pods/7b8c3dcb-4c41-43bd-852a-ad86946b1124/volumes" Oct 14 15:09:53 crc kubenswrapper[4860]: I1014 15:09:53.923361 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9c68c" podUID="153ddd28-cece-4e22-956e-421b65491e15" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Oct 14 15:09:53 crc kubenswrapper[4860]: I1014 15:09:53.924093 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:09:57 crc kubenswrapper[4860]: E1014 15:09:57.242913 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Oct 14 15:09:57 crc kubenswrapper[4860]: E1014 15:09:57.243452 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n84h646h54h7h665h54bhf5h59h5f6h98hb7h5c7hbdh5ddhf4h68dh5fbhd7h588h556h599hdh56dhb9h56fh55ch599h5c4h5b9h589h9fh565q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tpt5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-84b975bf87-4qg2x_openstack(7b08c17e-22a5-4238-a9df-3efc1ae5f335): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:09:57 crc kubenswrapper[4860]: E1014 15:09:57.248012 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-84b975bf87-4qg2x" podUID="7b08c17e-22a5-4238-a9df-3efc1ae5f335" Oct 14 15:09:57 crc kubenswrapper[4860]: E1014 15:09:57.362253 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Oct 14 15:09:57 crc kubenswrapper[4860]: E1014 15:09:57.362443 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n678h59h54ch647h9fh556h6chb9h67fh59dh556hf8h68dh66fh547h569h7bh685h78h588h58h575hcch55ch5fbh5b7h5b6h79h54fh5c8h595h688q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs7gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6466c9b897-b8tk5_openstack(d50105c7-28e1-401b-8447-715e9749be1a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:09:57 crc kubenswrapper[4860]: E1014 15:09:57.365518 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6466c9b897-b8tk5" podUID="d50105c7-28e1-401b-8447-715e9749be1a" Oct 14 15:09:58 crc kubenswrapper[4860]: I1014 15:09:58.924477 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9c68c" podUID="153ddd28-cece-4e22-956e-421b65491e15" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Oct 14 15:09:59 crc kubenswrapper[4860]: I1014 15:09:59.245975 
4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:09:59 crc kubenswrapper[4860]: I1014 15:09:59.246058 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:10:03 crc kubenswrapper[4860]: I1014 15:10:03.925371 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9c68c" podUID="153ddd28-cece-4e22-956e-421b65491e15" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Oct 14 15:10:07 crc kubenswrapper[4860]: I1014 15:10:07.872796 4860 scope.go:117] "RemoveContainer" containerID="1e1ee50eb6d13f49635132bd509fa859857719dbe6ed55a37b0d28a6f7db6b18" Oct 14 15:10:07 crc kubenswrapper[4860]: I1014 15:10:07.997841 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.187074 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-config\") pod \"153ddd28-cece-4e22-956e-421b65491e15\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.187178 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnfr9\" (UniqueName: \"kubernetes.io/projected/153ddd28-cece-4e22-956e-421b65491e15-kube-api-access-nnfr9\") pod \"153ddd28-cece-4e22-956e-421b65491e15\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.187209 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-ovsdbserver-nb\") pod \"153ddd28-cece-4e22-956e-421b65491e15\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.187498 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-dns-svc\") pod \"153ddd28-cece-4e22-956e-421b65491e15\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.187550 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-ovsdbserver-sb\") pod \"153ddd28-cece-4e22-956e-421b65491e15\" (UID: \"153ddd28-cece-4e22-956e-421b65491e15\") " Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.201167 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153ddd28-cece-4e22-956e-421b65491e15-kube-api-access-nnfr9" (OuterVolumeSpecName: "kube-api-access-nnfr9") pod "153ddd28-cece-4e22-956e-421b65491e15" (UID: "153ddd28-cece-4e22-956e-421b65491e15"). InnerVolumeSpecName "kube-api-access-nnfr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.244071 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-config" (OuterVolumeSpecName: "config") pod "153ddd28-cece-4e22-956e-421b65491e15" (UID: "153ddd28-cece-4e22-956e-421b65491e15"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.248640 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "153ddd28-cece-4e22-956e-421b65491e15" (UID: "153ddd28-cece-4e22-956e-421b65491e15"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.255741 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "153ddd28-cece-4e22-956e-421b65491e15" (UID: "153ddd28-cece-4e22-956e-421b65491e15"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.265577 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "153ddd28-cece-4e22-956e-421b65491e15" (UID: "153ddd28-cece-4e22-956e-421b65491e15"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.290873 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.290902 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnfr9\" (UniqueName: \"kubernetes.io/projected/153ddd28-cece-4e22-956e-421b65491e15-kube-api-access-nnfr9\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.290913 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.290923 4860 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.290931 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/153ddd28-cece-4e22-956e-421b65491e15-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:08 crc kubenswrapper[4860]: E1014 15:10:08.703182 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Oct 14 15:10:08 crc kubenswrapper[4860]: E1014 15:10:08.703326 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79rf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-x2247_openstack(f0a3bc02-1357-4751-9496-a41526515867): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:10:08 crc kubenswrapper[4860]: E1014 15:10:08.704526 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-x2247" podUID="f0a3bc02-1357-4751-9496-a41526515867" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.723865 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.729314 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.819624 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6466c9b897-b8tk5" event={"ID":"d50105c7-28e1-401b-8447-715e9749be1a","Type":"ContainerDied","Data":"d945d8021065378a88f302b824ee79927b240d1cbb37a5762bfe6523642f398b"} Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.819707 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6466c9b897-b8tk5" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.825512 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84b975bf87-4qg2x" event={"ID":"7b08c17e-22a5-4238-a9df-3efc1ae5f335","Type":"ContainerDied","Data":"21a500bc89f0a1050debff8437ec84ee5fda6b79df6ba9ab1725c727f76968a0"} Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.825597 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-84b975bf87-4qg2x" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.829150 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9c68c" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.830200 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9c68c" event={"ID":"153ddd28-cece-4e22-956e-421b65491e15","Type":"ContainerDied","Data":"1dec96f7d8d263a6c7d55f45f55c2b4a2486c82076ea7e0b3e8ebadd2e2e947f"} Oct 14 15:10:08 crc kubenswrapper[4860]: E1014 15:10:08.832917 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-x2247" podUID="f0a3bc02-1357-4751-9496-a41526515867" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.880868 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9c68c"] Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.888943 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9c68c"] Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.898635 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b08c17e-22a5-4238-a9df-3efc1ae5f335-config-data\") pod \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.898733 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b08c17e-22a5-4238-a9df-3efc1ae5f335-scripts\") pod \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.898889 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d50105c7-28e1-401b-8447-715e9749be1a-logs\") pod \"d50105c7-28e1-401b-8447-715e9749be1a\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.899126 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d50105c7-28e1-401b-8447-715e9749be1a-logs" (OuterVolumeSpecName: "logs") pod "d50105c7-28e1-401b-8447-715e9749be1a" (UID: "d50105c7-28e1-401b-8447-715e9749be1a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.899198 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b08c17e-22a5-4238-a9df-3efc1ae5f335-scripts" (OuterVolumeSpecName: "scripts") pod "7b08c17e-22a5-4238-a9df-3efc1ae5f335" (UID: "7b08c17e-22a5-4238-a9df-3efc1ae5f335"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.899370 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d50105c7-28e1-401b-8447-715e9749be1a-config-data\") pod \"d50105c7-28e1-401b-8447-715e9749be1a\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.899416 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7b08c17e-22a5-4238-a9df-3efc1ae5f335-horizon-secret-key\") pod \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.899473 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs7gt\" (UniqueName: \"kubernetes.io/projected/d50105c7-28e1-401b-8447-715e9749be1a-kube-api-access-qs7gt\") pod \"d50105c7-28e1-401b-8447-715e9749be1a\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.899502 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpt5f\" (UniqueName: \"kubernetes.io/projected/7b08c17e-22a5-4238-a9df-3efc1ae5f335-kube-api-access-tpt5f\") pod \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.899528 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d50105c7-28e1-401b-8447-715e9749be1a-horizon-secret-key\") pod \"d50105c7-28e1-401b-8447-715e9749be1a\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.899549 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d50105c7-28e1-401b-8447-715e9749be1a-scripts\") pod \"d50105c7-28e1-401b-8447-715e9749be1a\" (UID: \"d50105c7-28e1-401b-8447-715e9749be1a\") " Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.899577 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b08c17e-22a5-4238-a9df-3efc1ae5f335-logs\") pod \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\" (UID: \"7b08c17e-22a5-4238-a9df-3efc1ae5f335\") " Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.899912 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d50105c7-28e1-401b-8447-715e9749be1a-logs\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.899934 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b08c17e-22a5-4238-a9df-3efc1ae5f335-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.899902 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b08c17e-22a5-4238-a9df-3efc1ae5f335-config-data" (OuterVolumeSpecName: "config-data") pod "7b08c17e-22a5-4238-a9df-3efc1ae5f335" (UID: "7b08c17e-22a5-4238-a9df-3efc1ae5f335"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.900647 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b08c17e-22a5-4238-a9df-3efc1ae5f335-logs" (OuterVolumeSpecName: "logs") pod "7b08c17e-22a5-4238-a9df-3efc1ae5f335" (UID: "7b08c17e-22a5-4238-a9df-3efc1ae5f335"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.901702 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d50105c7-28e1-401b-8447-715e9749be1a-scripts" (OuterVolumeSpecName: "scripts") pod "d50105c7-28e1-401b-8447-715e9749be1a" (UID: "d50105c7-28e1-401b-8447-715e9749be1a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.901908 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d50105c7-28e1-401b-8447-715e9749be1a-config-data" (OuterVolumeSpecName: "config-data") pod "d50105c7-28e1-401b-8447-715e9749be1a" (UID: "d50105c7-28e1-401b-8447-715e9749be1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.905294 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d50105c7-28e1-401b-8447-715e9749be1a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d50105c7-28e1-401b-8447-715e9749be1a" (UID: "d50105c7-28e1-401b-8447-715e9749be1a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.905332 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b08c17e-22a5-4238-a9df-3efc1ae5f335-kube-api-access-tpt5f" (OuterVolumeSpecName: "kube-api-access-tpt5f") pod "7b08c17e-22a5-4238-a9df-3efc1ae5f335" (UID: "7b08c17e-22a5-4238-a9df-3efc1ae5f335"). InnerVolumeSpecName "kube-api-access-tpt5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.905356 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d50105c7-28e1-401b-8447-715e9749be1a-kube-api-access-qs7gt" (OuterVolumeSpecName: "kube-api-access-qs7gt") pod "d50105c7-28e1-401b-8447-715e9749be1a" (UID: "d50105c7-28e1-401b-8447-715e9749be1a"). InnerVolumeSpecName "kube-api-access-qs7gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.923569 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b08c17e-22a5-4238-a9df-3efc1ae5f335-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7b08c17e-22a5-4238-a9df-3efc1ae5f335" (UID: "7b08c17e-22a5-4238-a9df-3efc1ae5f335"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:08 crc kubenswrapper[4860]: I1014 15:10:08.926452 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9c68c" podUID="153ddd28-cece-4e22-956e-421b65491e15" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Oct 14 15:10:09 crc kubenswrapper[4860]: I1014 15:10:09.009544 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs7gt\" (UniqueName: \"kubernetes.io/projected/d50105c7-28e1-401b-8447-715e9749be1a-kube-api-access-qs7gt\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:09 crc kubenswrapper[4860]: I1014 15:10:09.009699 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpt5f\" (UniqueName: \"kubernetes.io/projected/7b08c17e-22a5-4238-a9df-3efc1ae5f335-kube-api-access-tpt5f\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:09 crc kubenswrapper[4860]: I1014 15:10:09.009784 4860 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d50105c7-28e1-401b-8447-715e9749be1a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:09 crc kubenswrapper[4860]: I1014 15:10:09.009884 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d50105c7-28e1-401b-8447-715e9749be1a-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:09 crc kubenswrapper[4860]: I1014 15:10:09.009949 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b08c17e-22a5-4238-a9df-3efc1ae5f335-logs\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:09 crc kubenswrapper[4860]: I1014 15:10:09.010087 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b08c17e-22a5-4238-a9df-3efc1ae5f335-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:09 crc kubenswrapper[4860]: I1014 15:10:09.010165 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d50105c7-28e1-401b-8447-715e9749be1a-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:09 crc kubenswrapper[4860]: I1014 15:10:09.010244 4860 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7b08c17e-22a5-4238-a9df-3efc1ae5f335-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:09 crc kubenswrapper[4860]: I1014 15:10:09.072373 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="153ddd28-cece-4e22-956e-421b65491e15" path="/var/lib/kubelet/pods/153ddd28-cece-4e22-956e-421b65491e15/volumes" Oct 14 15:10:09 crc kubenswrapper[4860]: I1014 15:10:09.168015 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6466c9b897-b8tk5"] Oct 14 15:10:09 crc kubenswrapper[4860]: I1014 15:10:09.174673 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6466c9b897-b8tk5"] Oct 14 15:10:09 crc kubenswrapper[4860]: I1014 15:10:09.230566 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84b975bf87-4qg2x"] Oct 14 15:10:09 crc kubenswrapper[4860]: I1014 15:10:09.240447 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-84b975bf87-4qg2x"] Oct 14 15:10:10 crc kubenswrapper[4860]: E1014 15:10:10.080578 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Oct 14 15:10:10 crc kubenswrapper[4860]: E1014 15:10:10.081163 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfcq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-grpb9_openstack(ca080412-b618-4293-a06d-e0d9a774d36b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:10:10 crc kubenswrapper[4860]: E1014 15:10:10.083704 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-grpb9" podUID="ca080412-b618-4293-a06d-e0d9a774d36b" Oct 14 15:10:10 crc kubenswrapper[4860]: I1014 15:10:10.242551 4860 scope.go:117] "RemoveContainer" containerID="1031868bea866a6c4c6c7e94d889d9ef722fef5da8df51ef1f86216bb5c64fec" Oct 14 15:10:10 crc kubenswrapper[4860]: I1014 15:10:10.318440 4860 scope.go:117] "RemoveContainer" containerID="f6b7078ecd48d961d854c2df6abcd6a7e258866f5315174634be1b689338bf81" Oct 14 15:10:10 crc kubenswrapper[4860]: 
I1014 15:10:10.592399 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7dd7969c76-f8cq5"] Oct 14 15:10:10 crc kubenswrapper[4860]: I1014 15:10:10.612750 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wkzht"] Oct 14 15:10:10 crc kubenswrapper[4860]: I1014 15:10:10.686731 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:10:10 crc kubenswrapper[4860]: W1014 15:10:10.703240 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2262533_70e8_4bb7_80b4_9576b13ab2a5.slice/crio-6f466709da421e53eca80f334497d37b0aad7ada049b016cac42377a27e17a4b WatchSource:0}: Error finding container 6f466709da421e53eca80f334497d37b0aad7ada049b016cac42377a27e17a4b: Status 404 returned error can't find the container with id 6f466709da421e53eca80f334497d37b0aad7ada049b016cac42377a27e17a4b Oct 14 15:10:10 crc kubenswrapper[4860]: I1014 15:10:10.734072 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8795558b4-cgsrj"] Oct 14 15:10:10 crc kubenswrapper[4860]: W1014 15:10:10.760683 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba50439f_28b5_4b76_9afb_b705c4037f8d.slice/crio-fe7912c6642c4eafc3591b75e77ef7b26779c68291742d8eda4da885adc64220 WatchSource:0}: Error finding container fe7912c6642c4eafc3591b75e77ef7b26779c68291742d8eda4da885adc64220: Status 404 returned error can't find the container with id fe7912c6642c4eafc3591b75e77ef7b26779c68291742d8eda4da885adc64220 Oct 14 15:10:10 crc kubenswrapper[4860]: I1014 15:10:10.858626 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8795558b4-cgsrj" event={"ID":"ba50439f-28b5-4b76-9afb-b705c4037f8d","Type":"ContainerStarted","Data":"fe7912c6642c4eafc3591b75e77ef7b26779c68291742d8eda4da885adc64220"} Oct 14 15:10:10 crc kubenswrapper[4860]: I1014 15:10:10.876928 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wkzht" event={"ID":"8616715a-5ecc-4bec-8e55-14626927cce5","Type":"ContainerStarted","Data":"70a89835a10582ee7d529019bf0feb24786cfe1cc7e4b22db749c51654191b99"} Oct 14 15:10:10 crc kubenswrapper[4860]: I1014 15:10:10.878295 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dbsq9" event={"ID":"3324c4e1-abc6-473d-8d14-28d41a4e27a8","Type":"ContainerStarted","Data":"e8f98dd80c026cf6fee32e32aa25db1f319ad8c9ece42a632eccf1b99c7e00c5"} Oct 14 15:10:10 crc kubenswrapper[4860]: I1014 15:10:10.893791 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2262533-70e8-4bb7-80b4-9576b13ab2a5","Type":"ContainerStarted","Data":"6f466709da421e53eca80f334497d37b0aad7ada049b016cac42377a27e17a4b"} Oct 14 15:10:10 crc kubenswrapper[4860]: I1014 15:10:10.897352 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b4d9d676f-t7pss" event={"ID":"531feef3-54a8-4a76-b87f-4fe76d0c7e46","Type":"ContainerStarted","Data":"62c06d70cda9431e2f03e4c3f2a0b5a526287c6d550a4f0943cba87bebdf50c8"} Oct 14 15:10:10 crc kubenswrapper[4860]: I1014 15:10:10.906844 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-dbsq9" podStartSLOduration=3.80158364 podStartE2EDuration="44.90682777s" podCreationTimestamp="2025-10-14 15:09:26 +0000 UTC" 
firstStartedPulling="2025-10-14 15:09:29.227136367 +0000 UTC m=+1230.813919806" lastFinishedPulling="2025-10-14 15:10:10.332380487 +0000 UTC m=+1271.919163936" observedRunningTime="2025-10-14 15:10:10.895713679 +0000 UTC m=+1272.482497148" watchObservedRunningTime="2025-10-14 15:10:10.90682777 +0000 UTC m=+1272.493611219" Oct 14 15:10:10 crc kubenswrapper[4860]: I1014 15:10:10.913831 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd7969c76-f8cq5" event={"ID":"e59fdcc0-928b-485d-a66b-450a1d1d76f4","Type":"ContainerStarted","Data":"b7c9f56039bb2f2a71244fb435ca7d7bffc834ba209822873fe2f88dcc9cb5f7"} Oct 14 15:10:10 crc kubenswrapper[4860]: I1014 15:10:10.915795 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b84e0757-6fba-44cd-a37d-0e7c06eab0e4","Type":"ContainerStarted","Data":"4062f398bce61d7246ad507365f65e675b877bd7bde754da04411c7405d59083"} Oct 14 15:10:10 crc kubenswrapper[4860]: E1014 15:10:10.936415 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-grpb9" podUID="ca080412-b618-4293-a06d-e0d9a774d36b" Oct 14 15:10:11 crc kubenswrapper[4860]: I1014 15:10:11.074308 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b08c17e-22a5-4238-a9df-3efc1ae5f335" path="/var/lib/kubelet/pods/7b08c17e-22a5-4238-a9df-3efc1ae5f335/volumes" Oct 14 15:10:11 crc kubenswrapper[4860]: I1014 15:10:11.074880 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d50105c7-28e1-401b-8447-715e9749be1a" path="/var/lib/kubelet/pods/d50105c7-28e1-401b-8447-715e9749be1a/volumes" Oct 14 15:10:11 crc kubenswrapper[4860]: I1014 15:10:11.965331 4860 generic.go:334] "Generic (PLEG): container finished" podID="c63dca02-9db5-41e7-90a0-0c19bd729242" containerID="954bc4d1818bf622ee8a06144a3b48f2323a934fd95d9db7376cc47b6cd2988a" exitCode=0 Oct 14 15:10:11 crc kubenswrapper[4860]: I1014 15:10:11.965487 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dhd74" event={"ID":"c63dca02-9db5-41e7-90a0-0c19bd729242","Type":"ContainerDied","Data":"954bc4d1818bf622ee8a06144a3b48f2323a934fd95d9db7376cc47b6cd2988a"} Oct 14 15:10:11 crc kubenswrapper[4860]: I1014 15:10:11.970195 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wkzht" event={"ID":"8616715a-5ecc-4bec-8e55-14626927cce5","Type":"ContainerStarted","Data":"6469106c14d5090a665c2bbd390f714e5f630ed44a6b1e2a12bb59c850325ed6"} Oct 14 15:10:11 crc kubenswrapper[4860]: I1014 15:10:11.983173 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2262533-70e8-4bb7-80b4-9576b13ab2a5","Type":"ContainerStarted","Data":"53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e"} Oct 14 15:10:11 crc kubenswrapper[4860]: I1014 15:10:11.988143 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b4d9d676f-t7pss" podUID="531feef3-54a8-4a76-b87f-4fe76d0c7e46" containerName="horizon-log" containerID="cri-o://62c06d70cda9431e2f03e4c3f2a0b5a526287c6d550a4f0943cba87bebdf50c8" gracePeriod=30 Oct 14 15:10:11 crc kubenswrapper[4860]: I1014 15:10:11.988378 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b4d9d676f-t7pss" 
event={"ID":"531feef3-54a8-4a76-b87f-4fe76d0c7e46","Type":"ContainerStarted","Data":"c1523cd43cc6372f8bf1d0026cfba9e4a0296bb3f3015fadf98dba296e94fbbc"} Oct 14 15:10:11 crc kubenswrapper[4860]: I1014 15:10:11.988422 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b4d9d676f-t7pss" podUID="531feef3-54a8-4a76-b87f-4fe76d0c7e46" containerName="horizon" containerID="cri-o://c1523cd43cc6372f8bf1d0026cfba9e4a0296bb3f3015fadf98dba296e94fbbc" gracePeriod=30 Oct 14 15:10:11 crc kubenswrapper[4860]: I1014 15:10:11.994623 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8795558b4-cgsrj" event={"ID":"ba50439f-28b5-4b76-9afb-b705c4037f8d","Type":"ContainerStarted","Data":"77384f8c762ca369199fe7f2734dfaaa8f59ec6ad97c1602f4bac3fd00f71d13"} Oct 14 15:10:11 crc kubenswrapper[4860]: I1014 15:10:11.994656 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8795558b4-cgsrj" event={"ID":"ba50439f-28b5-4b76-9afb-b705c4037f8d","Type":"ContainerStarted","Data":"2b671af9dd363edb12e7fcc13d309058dfb630cc5d9e41205038d218ed74a13b"} Oct 14 15:10:12 crc kubenswrapper[4860]: I1014 15:10:12.003509 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd7969c76-f8cq5" event={"ID":"e59fdcc0-928b-485d-a66b-450a1d1d76f4","Type":"ContainerStarted","Data":"c0475b19ac764863a4f2450bff029c0c7ec4b25661f0aa2940b7727fb8b0f16c"} Oct 14 15:10:12 crc kubenswrapper[4860]: I1014 15:10:12.003788 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd7969c76-f8cq5" event={"ID":"e59fdcc0-928b-485d-a66b-450a1d1d76f4","Type":"ContainerStarted","Data":"48c829aeecd60e8eb72c1f7f8f0dd773866393ac607409fd129497c22dfd7dfc"} Oct 14 15:10:12 crc kubenswrapper[4860]: I1014 15:10:12.014691 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wkzht" podStartSLOduration=23.014674751 podStartE2EDuration="23.014674751s" podCreationTimestamp="2025-10-14 15:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:10:12.008331006 +0000 UTC m=+1273.595114455" watchObservedRunningTime="2025-10-14 15:10:12.014674751 +0000 UTC m=+1273.601458200" Oct 14 15:10:12 crc kubenswrapper[4860]: I1014 15:10:12.018754 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"147336fc-14ab-4c07-8da5-9dc29f2be3d4","Type":"ContainerStarted","Data":"0dc4b64bad4b46a97e43724cbf2c515ad1df95f59be4251e69f7f4a7fe455462"} Oct 14 15:10:12 crc kubenswrapper[4860]: I1014 15:10:12.018988 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="147336fc-14ab-4c07-8da5-9dc29f2be3d4" containerName="glance-log" containerID="cri-o://c66d4ebe14789fc6534efb70e5d9d3a4f0baf278dcd10a2040282b04c2aef229" gracePeriod=30 Oct 14 15:10:12 crc kubenswrapper[4860]: I1014 15:10:12.019230 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="147336fc-14ab-4c07-8da5-9dc29f2be3d4" containerName="glance-httpd" containerID="cri-o://0dc4b64bad4b46a97e43724cbf2c515ad1df95f59be4251e69f7f4a7fe455462" gracePeriod=30 Oct 14 15:10:12 crc kubenswrapper[4860]: I1014 15:10:12.032482 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6b4d9d676f-t7pss" 
podStartSLOduration=3.284701309 podStartE2EDuration="41.032464454s" podCreationTimestamp="2025-10-14 15:09:31 +0000 UTC" firstStartedPulling="2025-10-14 15:09:32.29955272 +0000 UTC m=+1233.886336169" lastFinishedPulling="2025-10-14 15:10:10.047315865 +0000 UTC m=+1271.634099314" observedRunningTime="2025-10-14 15:10:12.031417979 +0000 UTC m=+1273.618201428" watchObservedRunningTime="2025-10-14 15:10:12.032464454 +0000 UTC m=+1273.619247903" Oct 14 15:10:12 crc kubenswrapper[4860]: I1014 15:10:12.080374 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7dd7969c76-f8cq5" podStartSLOduration=34.0803519 podStartE2EDuration="34.0803519s" podCreationTimestamp="2025-10-14 15:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:10:12.068374788 +0000 UTC m=+1273.655158227" watchObservedRunningTime="2025-10-14 15:10:12.0803519 +0000 UTC m=+1273.667135349" Oct 14 15:10:12 crc kubenswrapper[4860]: I1014 15:10:12.092919 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8795558b4-cgsrj" podStartSLOduration=34.092893036 podStartE2EDuration="34.092893036s" podCreationTimestamp="2025-10-14 15:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:10:12.088754415 +0000 UTC m=+1273.675537884" watchObservedRunningTime="2025-10-14 15:10:12.092893036 +0000 UTC m=+1273.679676485" Oct 14 15:10:12 crc kubenswrapper[4860]: I1014 15:10:12.122231 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=38.12221708 podStartE2EDuration="38.12221708s" podCreationTimestamp="2025-10-14 15:09:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:10:12.116744787 +0000 UTC m=+1273.703528236" watchObservedRunningTime="2025-10-14 15:10:12.12221708 +0000 UTC m=+1273.709000529" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.027667 4860 generic.go:334] "Generic (PLEG): container finished" podID="147336fc-14ab-4c07-8da5-9dc29f2be3d4" containerID="0dc4b64bad4b46a97e43724cbf2c515ad1df95f59be4251e69f7f4a7fe455462" exitCode=0 Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.028043 4860 generic.go:334] "Generic (PLEG): container finished" podID="147336fc-14ab-4c07-8da5-9dc29f2be3d4" containerID="c66d4ebe14789fc6534efb70e5d9d3a4f0baf278dcd10a2040282b04c2aef229" exitCode=143 Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.027701 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"147336fc-14ab-4c07-8da5-9dc29f2be3d4","Type":"ContainerDied","Data":"0dc4b64bad4b46a97e43724cbf2c515ad1df95f59be4251e69f7f4a7fe455462"} Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.028148 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"147336fc-14ab-4c07-8da5-9dc29f2be3d4","Type":"ContainerDied","Data":"c66d4ebe14789fc6534efb70e5d9d3a4f0baf278dcd10a2040282b04c2aef229"} Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.030163 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"f2262533-70e8-4bb7-80b4-9576b13ab2a5","Type":"ContainerStarted","Data":"ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92"} Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.030198 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f2262533-70e8-4bb7-80b4-9576b13ab2a5" containerName="glance-log" containerID="cri-o://53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e" gracePeriod=30 Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.030282 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f2262533-70e8-4bb7-80b4-9576b13ab2a5" containerName="glance-httpd" containerID="cri-o://ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92" gracePeriod=30 Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.192471 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.223231 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=37.223214535 podStartE2EDuration="37.223214535s" podCreationTimestamp="2025-10-14 15:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:10:13.063769181 +0000 UTC m=+1274.650552650" watchObservedRunningTime="2025-10-14 15:10:13.223214535 +0000 UTC m=+1274.809997984" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.311899 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/147336fc-14ab-4c07-8da5-9dc29f2be3d4-httpd-run\") pod \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.312016 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/147336fc-14ab-4c07-8da5-9dc29f2be3d4-logs\") pod \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.312076 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-624wl\" (UniqueName: \"kubernetes.io/projected/147336fc-14ab-4c07-8da5-9dc29f2be3d4-kube-api-access-624wl\") pod \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.312355 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-scripts\") pod \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.312371 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-config-data\") pod \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.312397 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.312471 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-combined-ca-bundle\") pod \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\" (UID: \"147336fc-14ab-4c07-8da5-9dc29f2be3d4\") " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.313364 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/147336fc-14ab-4c07-8da5-9dc29f2be3d4-logs" (OuterVolumeSpecName: "logs") pod "147336fc-14ab-4c07-8da5-9dc29f2be3d4" (UID: "147336fc-14ab-4c07-8da5-9dc29f2be3d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.313581 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/147336fc-14ab-4c07-8da5-9dc29f2be3d4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "147336fc-14ab-4c07-8da5-9dc29f2be3d4" (UID: "147336fc-14ab-4c07-8da5-9dc29f2be3d4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.313678 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/147336fc-14ab-4c07-8da5-9dc29f2be3d4-logs\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.313693 4860 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/147336fc-14ab-4c07-8da5-9dc29f2be3d4-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.319582 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-scripts" (OuterVolumeSpecName: "scripts") pod "147336fc-14ab-4c07-8da5-9dc29f2be3d4" (UID: "147336fc-14ab-4c07-8da5-9dc29f2be3d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.321922 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147336fc-14ab-4c07-8da5-9dc29f2be3d4-kube-api-access-624wl" (OuterVolumeSpecName: "kube-api-access-624wl") pod "147336fc-14ab-4c07-8da5-9dc29f2be3d4" (UID: "147336fc-14ab-4c07-8da5-9dc29f2be3d4"). InnerVolumeSpecName "kube-api-access-624wl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.323190 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "147336fc-14ab-4c07-8da5-9dc29f2be3d4" (UID: "147336fc-14ab-4c07-8da5-9dc29f2be3d4"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.339580 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dhd74" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.367651 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "147336fc-14ab-4c07-8da5-9dc29f2be3d4" (UID: "147336fc-14ab-4c07-8da5-9dc29f2be3d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.405499 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-config-data" (OuterVolumeSpecName: "config-data") pod "147336fc-14ab-4c07-8da5-9dc29f2be3d4" (UID: "147336fc-14ab-4c07-8da5-9dc29f2be3d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.415744 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-624wl\" (UniqueName: \"kubernetes.io/projected/147336fc-14ab-4c07-8da5-9dc29f2be3d4-kube-api-access-624wl\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.415779 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.415790 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.415821 4860 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.415831 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147336fc-14ab-4c07-8da5-9dc29f2be3d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.445616 4860 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.516565 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63dca02-9db5-41e7-90a0-0c19bd729242-combined-ca-bundle\") pod \"c63dca02-9db5-41e7-90a0-0c19bd729242\" (UID: \"c63dca02-9db5-41e7-90a0-0c19bd729242\") " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.516621 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8jhg\" (UniqueName: \"kubernetes.io/projected/c63dca02-9db5-41e7-90a0-0c19bd729242-kube-api-access-b8jhg\") pod \"c63dca02-9db5-41e7-90a0-0c19bd729242\" (UID: \"c63dca02-9db5-41e7-90a0-0c19bd729242\") " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.516700 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c63dca02-9db5-41e7-90a0-0c19bd729242-config\") pod \"c63dca02-9db5-41e7-90a0-0c19bd729242\" (UID: \"c63dca02-9db5-41e7-90a0-0c19bd729242\") " Oct 14 
15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.517184 4860 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.554069 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63dca02-9db5-41e7-90a0-0c19bd729242-kube-api-access-b8jhg" (OuterVolumeSpecName: "kube-api-access-b8jhg") pod "c63dca02-9db5-41e7-90a0-0c19bd729242" (UID: "c63dca02-9db5-41e7-90a0-0c19bd729242"). InnerVolumeSpecName "kube-api-access-b8jhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.558120 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63dca02-9db5-41e7-90a0-0c19bd729242-config" (OuterVolumeSpecName: "config") pod "c63dca02-9db5-41e7-90a0-0c19bd729242" (UID: "c63dca02-9db5-41e7-90a0-0c19bd729242"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.558969 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63dca02-9db5-41e7-90a0-0c19bd729242-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c63dca02-9db5-41e7-90a0-0c19bd729242" (UID: "c63dca02-9db5-41e7-90a0-0c19bd729242"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.620055 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63dca02-9db5-41e7-90a0-0c19bd729242-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.620089 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8jhg\" (UniqueName: \"kubernetes.io/projected/c63dca02-9db5-41e7-90a0-0c19bd729242-kube-api-access-b8jhg\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.620100 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c63dca02-9db5-41e7-90a0-0c19bd729242-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.677758 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.825296 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-combined-ca-bundle\") pod \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.825345 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-config-data\") pod \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.825417 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2262533-70e8-4bb7-80b4-9576b13ab2a5-logs\") pod \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.826631 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2262533-70e8-4bb7-80b4-9576b13ab2a5-logs" (OuterVolumeSpecName: "logs") pod "f2262533-70e8-4bb7-80b4-9576b13ab2a5" (UID: "f2262533-70e8-4bb7-80b4-9576b13ab2a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.826696 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-scripts\") pod \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.827079 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd5gt\" (UniqueName: \"kubernetes.io/projected/f2262533-70e8-4bb7-80b4-9576b13ab2a5-kube-api-access-bd5gt\") pod \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.827182 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.827662 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2262533-70e8-4bb7-80b4-9576b13ab2a5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f2262533-70e8-4bb7-80b4-9576b13ab2a5" (UID: "f2262533-70e8-4bb7-80b4-9576b13ab2a5"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.827390 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2262533-70e8-4bb7-80b4-9576b13ab2a5-httpd-run\") pod \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\" (UID: \"f2262533-70e8-4bb7-80b4-9576b13ab2a5\") " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.828405 4860 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2262533-70e8-4bb7-80b4-9576b13ab2a5-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.828445 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2262533-70e8-4bb7-80b4-9576b13ab2a5-logs\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.832271 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-scripts" (OuterVolumeSpecName: "scripts") pod "f2262533-70e8-4bb7-80b4-9576b13ab2a5" (UID: "f2262533-70e8-4bb7-80b4-9576b13ab2a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.834088 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "f2262533-70e8-4bb7-80b4-9576b13ab2a5" (UID: "f2262533-70e8-4bb7-80b4-9576b13ab2a5"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.836192 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2262533-70e8-4bb7-80b4-9576b13ab2a5-kube-api-access-bd5gt" (OuterVolumeSpecName: "kube-api-access-bd5gt") pod "f2262533-70e8-4bb7-80b4-9576b13ab2a5" (UID: "f2262533-70e8-4bb7-80b4-9576b13ab2a5"). InnerVolumeSpecName "kube-api-access-bd5gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.873627 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2262533-70e8-4bb7-80b4-9576b13ab2a5" (UID: "f2262533-70e8-4bb7-80b4-9576b13ab2a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.899984 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-config-data" (OuterVolumeSpecName: "config-data") pod "f2262533-70e8-4bb7-80b4-9576b13ab2a5" (UID: "f2262533-70e8-4bb7-80b4-9576b13ab2a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.930665 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.930711 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.930723 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2262533-70e8-4bb7-80b4-9576b13ab2a5-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.930734 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd5gt\" (UniqueName: \"kubernetes.io/projected/f2262533-70e8-4bb7-80b4-9576b13ab2a5-kube-api-access-bd5gt\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.930779 4860 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Oct 14 15:10:13 crc kubenswrapper[4860]: I1014 15:10:13.947539 4860 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.032139 4860 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.040807 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"147336fc-14ab-4c07-8da5-9dc29f2be3d4","Type":"ContainerDied","Data":"e18fb086acbbfec2055b07287f27240c308e732af7f99d0dc0537c6cae629b69"} Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.040854 4860 scope.go:117] "RemoveContainer" containerID="0dc4b64bad4b46a97e43724cbf2c515ad1df95f59be4251e69f7f4a7fe455462" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.040965 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.047733 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dhd74" event={"ID":"c63dca02-9db5-41e7-90a0-0c19bd729242","Type":"ContainerDied","Data":"d52c990c19b00f963efa7c0aa2a8b4b9552db80fb9340f1ce617e91e9a787773"} Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.047765 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d52c990c19b00f963efa7c0aa2a8b4b9552db80fb9340f1ce617e91e9a787773" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.047758 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dhd74" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.059727 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b84e0757-6fba-44cd-a37d-0e7c06eab0e4","Type":"ContainerStarted","Data":"2001b635f398fb96fa37e401970bac857fd553d03526ea4c5f1036a1eda74ec2"} Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.130991 4860 generic.go:334] "Generic (PLEG): container finished" podID="f2262533-70e8-4bb7-80b4-9576b13ab2a5" containerID="ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92" exitCode=0 Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.131172 4860 generic.go:334] "Generic (PLEG): container finished" podID="f2262533-70e8-4bb7-80b4-9576b13ab2a5" containerID="53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e" exitCode=143 Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.131252 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2262533-70e8-4bb7-80b4-9576b13ab2a5","Type":"ContainerDied","Data":"ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92"} Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.131311 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.131327 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2262533-70e8-4bb7-80b4-9576b13ab2a5","Type":"ContainerDied","Data":"53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e"} Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.131820 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2262533-70e8-4bb7-80b4-9576b13ab2a5","Type":"ContainerDied","Data":"6f466709da421e53eca80f334497d37b0aad7ada049b016cac42377a27e17a4b"} Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.212088 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lnjrn"] Oct 14 15:10:14 crc kubenswrapper[4860]: E1014 15:10:14.212446 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2262533-70e8-4bb7-80b4-9576b13ab2a5" containerName="glance-httpd" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.212458 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2262533-70e8-4bb7-80b4-9576b13ab2a5" containerName="glance-httpd" Oct 14 15:10:14 crc kubenswrapper[4860]: E1014 15:10:14.212470 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2262533-70e8-4bb7-80b4-9576b13ab2a5" containerName="glance-log" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.212476 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2262533-70e8-4bb7-80b4-9576b13ab2a5" containerName="glance-log" Oct 14 15:10:14 crc kubenswrapper[4860]: E1014 15:10:14.212496 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147336fc-14ab-4c07-8da5-9dc29f2be3d4" containerName="glance-log" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.212502 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="147336fc-14ab-4c07-8da5-9dc29f2be3d4" containerName="glance-log" Oct 14 15:10:14 crc kubenswrapper[4860]: E1014 15:10:14.212514 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153ddd28-cece-4e22-956e-421b65491e15" containerName="init" Oct 14 15:10:14 crc 
kubenswrapper[4860]: I1014 15:10:14.212519 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="153ddd28-cece-4e22-956e-421b65491e15" containerName="init" Oct 14 15:10:14 crc kubenswrapper[4860]: E1014 15:10:14.212531 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153ddd28-cece-4e22-956e-421b65491e15" containerName="dnsmasq-dns" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.212536 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="153ddd28-cece-4e22-956e-421b65491e15" containerName="dnsmasq-dns" Oct 14 15:10:14 crc kubenswrapper[4860]: E1014 15:10:14.212546 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147336fc-14ab-4c07-8da5-9dc29f2be3d4" containerName="glance-httpd" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.212552 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="147336fc-14ab-4c07-8da5-9dc29f2be3d4" containerName="glance-httpd" Oct 14 15:10:14 crc kubenswrapper[4860]: E1014 15:10:14.212562 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63dca02-9db5-41e7-90a0-0c19bd729242" containerName="neutron-db-sync" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.212567 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63dca02-9db5-41e7-90a0-0c19bd729242" containerName="neutron-db-sync" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.212745 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2262533-70e8-4bb7-80b4-9576b13ab2a5" containerName="glance-httpd" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.212764 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2262533-70e8-4bb7-80b4-9576b13ab2a5" containerName="glance-log" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.212777 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="153ddd28-cece-4e22-956e-421b65491e15" containerName="dnsmasq-dns" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.212787 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="147336fc-14ab-4c07-8da5-9dc29f2be3d4" containerName="glance-httpd" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.212797 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="147336fc-14ab-4c07-8da5-9dc29f2be3d4" containerName="glance-log" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.212803 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63dca02-9db5-41e7-90a0-0c19bd729242" containerName="neutron-db-sync" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.216909 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.238084 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lnjrn"] Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.254572 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.262357 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.269424 4860 scope.go:117] "RemoveContainer" containerID="c66d4ebe14789fc6534efb70e5d9d3a4f0baf278dcd10a2040282b04c2aef229" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.291109 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.292687 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.298907 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.299166 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.299274 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.300507 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.303310 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-g6hpd" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.318211 4860 scope.go:117] "RemoveContainer" containerID="ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.318322 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.325466 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.341056 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-dns-svc\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.341117 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.341197 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-ovsdbserver-sb\") pod 
\"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.341223 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxntk\" (UniqueName: \"kubernetes.io/projected/0ab412ac-4ad8-4281-a48a-50c957b45ce2-kube-api-access-dxntk\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.341257 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-config\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.341356 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.379174 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.380764 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.398573 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.398924 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.416130 4860 scope.go:117] "RemoveContainer" containerID="53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.422457 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5dbc5f5f64-m4tp9"] Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.423916 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.430534 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7nlmd" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.430722 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.430855 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.430979 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.431369 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.442284 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5dbc5f5f64-m4tp9"] Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.443185 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.443222 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.443243 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxntk\" (UniqueName: \"kubernetes.io/projected/0ab412ac-4ad8-4281-a48a-50c957b45ce2-kube-api-access-dxntk\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.443265 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-config\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.443285 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.443300 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.443317 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.443339 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvzbm\" (UniqueName: \"kubernetes.io/projected/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-kube-api-access-bvzbm\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.443363 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.443394 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.443414 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.443462 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-dns-svc\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.443479 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.443495 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.444426 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.444718 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-dns-svc\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.445019 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.445246 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.446127 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-config\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.467567 4860 scope.go:117] "RemoveContainer" containerID="ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92" Oct 14 15:10:14 crc kubenswrapper[4860]: E1014 15:10:14.472004 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92\": container with ID starting with ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92 not found: ID does not exist" containerID="ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.472075 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92"} err="failed to get container status \"ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92\": rpc error: code = NotFound desc = could not find container \"ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92\": container with ID starting with ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92 not found: ID does not exist" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.472102 4860 scope.go:117] "RemoveContainer" containerID="53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e" Oct 14 15:10:14 crc kubenswrapper[4860]: E1014 15:10:14.472971 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e\": container with ID starting with 53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e not found: ID does not exist" containerID="53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.472999 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e"} 
err="failed to get container status \"53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e\": rpc error: code = NotFound desc = could not find container \"53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e\": container with ID starting with 53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e not found: ID does not exist" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.473014 4860 scope.go:117] "RemoveContainer" containerID="ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.484479 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxntk\" (UniqueName: \"kubernetes.io/projected/0ab412ac-4ad8-4281-a48a-50c957b45ce2-kube-api-access-dxntk\") pod \"dnsmasq-dns-55f844cf75-lnjrn\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") " pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.489683 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92"} err="failed to get container status \"ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92\": rpc error: code = NotFound desc = could not find container \"ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92\": container with ID starting with ccf335c99e4cbabd3e62ac42a11468609175aa43ecf91522e3f2ac1a7e4c1c92 not found: ID does not exist" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.489716 4860 scope.go:117] "RemoveContainer" containerID="53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.494502 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e"} err="failed to get container status \"53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e\": rpc error: code = NotFound desc = could not find container \"53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e\": container with ID starting with 53254598f8519c06ae406da8a332fe24c9efac8ff1f99a40085711fa3396a14e not found: ID does not exist" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.551608 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvzbm\" (UniqueName: \"kubernetes.io/projected/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-kube-api-access-bvzbm\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.551654 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1deda631-ca4a-40fe-95ce-a2c602baa9e7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.551672 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc 
kubenswrapper[4860]: I1014 15:10:14.551692 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1deda631-ca4a-40fe-95ce-a2c602baa9e7-logs\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.551720 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-httpd-config\") pod \"neutron-5dbc5f5f64-m4tp9\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") " pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.551750 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.551785 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-ovndb-tls-certs\") pod \"neutron-5dbc5f5f64-m4tp9\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") " pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.551810 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.551834 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-config\") pod \"neutron-5dbc5f5f64-m4tp9\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") " pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.551854 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgwz8\" (UniqueName: \"kubernetes.io/projected/129a5016-7ba9-4901-abe0-9531c4129a99-kube-api-access-mgwz8\") pod \"neutron-5dbc5f5f64-m4tp9\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") " pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.551907 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.551941 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-combined-ca-bundle\") pod \"neutron-5dbc5f5f64-m4tp9\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") " pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:14 crc kubenswrapper[4860]: 
I1014 15:10:14.551979 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.552001 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.552044 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-scripts\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.552074 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfth6\" (UniqueName: \"kubernetes.io/projected/1deda631-ca4a-40fe-95ce-a2c602baa9e7-kube-api-access-zfth6\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.552100 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-config-data\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.552121 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.552147 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.552168 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.552192 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 
15:10:14.552648 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.560539 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.560938 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.567597 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.575552 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.580342 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.583367 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.585377 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.587652 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvzbm\" (UniqueName: \"kubernetes.io/projected/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-kube-api-access-bvzbm\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.609911 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " pod="openstack/glance-default-internal-api-0" Oct 
14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.655967 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.656016 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-scripts\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.656070 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfth6\" (UniqueName: \"kubernetes.io/projected/1deda631-ca4a-40fe-95ce-a2c602baa9e7-kube-api-access-zfth6\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.656103 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-config-data\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.656129 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.656165 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1deda631-ca4a-40fe-95ce-a2c602baa9e7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.656183 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.656200 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1deda631-ca4a-40fe-95ce-a2c602baa9e7-logs\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.656218 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-httpd-config\") pod \"neutron-5dbc5f5f64-m4tp9\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") " pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.656249 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-ovndb-tls-certs\") pod \"neutron-5dbc5f5f64-m4tp9\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") " pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.656280 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-config\") pod \"neutron-5dbc5f5f64-m4tp9\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") " pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.656289 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.656305 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgwz8\" (UniqueName: \"kubernetes.io/projected/129a5016-7ba9-4901-abe0-9531c4129a99-kube-api-access-mgwz8\") pod \"neutron-5dbc5f5f64-m4tp9\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") " pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.656381 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-combined-ca-bundle\") pod \"neutron-5dbc5f5f64-m4tp9\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") " pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.664688 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.665643 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1deda631-ca4a-40fe-95ce-a2c602baa9e7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.668152 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-combined-ca-bundle\") pod \"neutron-5dbc5f5f64-m4tp9\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") " pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.670779 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1deda631-ca4a-40fe-95ce-a2c602baa9e7-logs\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.681883 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-scripts\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.692691 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-ovndb-tls-certs\") pod \"neutron-5dbc5f5f64-m4tp9\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") " pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.734190 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-config-data\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.739209 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-httpd-config\") pod \"neutron-5dbc5f5f64-m4tp9\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") " pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.740828 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.741320 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.742117 4860 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mgwz8\" (UniqueName: \"kubernetes.io/projected/129a5016-7ba9-4901-abe0-9531c4129a99-kube-api-access-mgwz8\") pod \"neutron-5dbc5f5f64-m4tp9\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") " pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.743989 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfth6\" (UniqueName: \"kubernetes.io/projected/1deda631-ca4a-40fe-95ce-a2c602baa9e7-kube-api-access-zfth6\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.748925 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-config\") pod \"neutron-5dbc5f5f64-m4tp9\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") " pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:14 crc kubenswrapper[4860]: I1014 15:10:14.777448 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " pod="openstack/glance-default-external-api-0" Oct 14 15:10:15 crc kubenswrapper[4860]: I1014 15:10:15.031596 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 15:10:15 crc kubenswrapper[4860]: I1014 15:10:15.053408 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dbc5f5f64-m4tp9" Oct 14 15:10:15 crc kubenswrapper[4860]: I1014 15:10:15.108930 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="147336fc-14ab-4c07-8da5-9dc29f2be3d4" path="/var/lib/kubelet/pods/147336fc-14ab-4c07-8da5-9dc29f2be3d4/volumes" Oct 14 15:10:15 crc kubenswrapper[4860]: I1014 15:10:15.109834 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2262533-70e8-4bb7-80b4-9576b13ab2a5" path="/var/lib/kubelet/pods/f2262533-70e8-4bb7-80b4-9576b13ab2a5/volumes" Oct 14 15:10:15 crc kubenswrapper[4860]: I1014 15:10:15.576494 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:10:15 crc kubenswrapper[4860]: I1014 15:10:15.596105 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lnjrn"] Oct 14 15:10:15 crc kubenswrapper[4860]: W1014 15:10:15.635602 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ab412ac_4ad8_4281_a48a_50c957b45ce2.slice/crio-0469b911d83e0ad96bf1d0813d483f40a8e1aee58e833aeed1e8b5f5af1f20b0 WatchSource:0}: Error finding container 0469b911d83e0ad96bf1d0813d483f40a8e1aee58e833aeed1e8b5f5af1f20b0: Status 404 returned error can't find the container with id 0469b911d83e0ad96bf1d0813d483f40a8e1aee58e833aeed1e8b5f5af1f20b0 Oct 14 15:10:16 crc kubenswrapper[4860]: I1014 15:10:16.012123 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5dbc5f5f64-m4tp9"] Oct 14 15:10:16 crc kubenswrapper[4860]: I1014 15:10:16.211431 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:10:16 crc kubenswrapper[4860]: I1014 15:10:16.252821 4860 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-default-external-api-0" event={"ID":"1deda631-ca4a-40fe-95ce-a2c602baa9e7","Type":"ContainerStarted","Data":"308438ecf338dad7e9e9636a1b00e163376b94d56b45a1ee44a494b8f3714889"} Oct 14 15:10:16 crc kubenswrapper[4860]: I1014 15:10:16.258411 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dbc5f5f64-m4tp9" event={"ID":"129a5016-7ba9-4901-abe0-9531c4129a99","Type":"ContainerStarted","Data":"b8612bb4e3267f3affcaf3fdcc366045c787b17a551eba8df99313dae35da300"} Oct 14 15:10:16 crc kubenswrapper[4860]: I1014 15:10:16.268347 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" event={"ID":"0ab412ac-4ad8-4281-a48a-50c957b45ce2","Type":"ContainerStarted","Data":"0469b911d83e0ad96bf1d0813d483f40a8e1aee58e833aeed1e8b5f5af1f20b0"} Oct 14 15:10:16 crc kubenswrapper[4860]: I1014 15:10:16.272950 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb464cdf-6fb0-4ed3-9c3d-2a505478def4","Type":"ContainerStarted","Data":"9ab8cc2877269083d833b356fe8a698a48148cca15b85ee1a014da197de31c63"} Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.012986 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b4bf5b577-882p6"] Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.014796 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.021548 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.021852 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.146004 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-config\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.146069 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s56jm\" (UniqueName: \"kubernetes.io/projected/87973523-835b-4676-babb-8ed122fa8b93-kube-api-access-s56jm\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.146148 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-combined-ca-bundle\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.146167 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-ovndb-tls-certs\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.146217 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-httpd-config\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.146241 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-public-tls-certs\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.146312 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-internal-tls-certs\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.193525 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b4bf5b577-882p6"] Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.250842 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-combined-ca-bundle\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.250968 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-ovndb-tls-certs\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.251102 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-httpd-config\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.251171 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-public-tls-certs\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.251253 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-internal-tls-certs\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.251315 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-config\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.251393 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s56jm\" (UniqueName: \"kubernetes.io/projected/87973523-835b-4676-babb-8ed122fa8b93-kube-api-access-s56jm\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.258647 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-httpd-config\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.259915 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-ovndb-tls-certs\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.260719 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-combined-ca-bundle\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.262935 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-internal-tls-certs\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.271891 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-config\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.276878 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87973523-835b-4676-babb-8ed122fa8b93-public-tls-certs\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.295436 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s56jm\" (UniqueName: \"kubernetes.io/projected/87973523-835b-4676-babb-8ed122fa8b93-kube-api-access-s56jm\") pod \"neutron-b4bf5b577-882p6\" (UID: \"87973523-835b-4676-babb-8ed122fa8b93\") " pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.364423 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb464cdf-6fb0-4ed3-9c3d-2a505478def4","Type":"ContainerStarted","Data":"9dd3219cdd88585ae9052be0ae0473c158eb9f1f6690446dc4b01b27eb4a4f42"} Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.375119 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1deda631-ca4a-40fe-95ce-a2c602baa9e7","Type":"ContainerStarted","Data":"fc5c613054f1d23149895761bc026e2c7a60b4d502b8a07203dbd6a2fdee1eb9"} 
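
The entries above trace the kubelet's normal cold start for the glance, neutron, and dnsmasq pods: SyncLoop ADD from the API server; the volume reconciler walking each volume through VerifyControllerAttachedVolume, MountVolume started, and MountVolume.SetUp succeeded; sandbox creation ("No sandbox for pod can be found. Need to start a new one"); and finally PLEG ContainerStarted events as CRI-O reports each container. Every kubenswrapper message carries a klog header: a severity letter fused with the month and day (I1014 = INFO, Oct 14), the wall-clock time, the emitting process id, the source file:line, then a quoted message with key="value" pairs. Below is a minimal Go sketch for tallying those messages from a saved excerpt of this journal; reading from stdin and the escape-aware regex are illustrative assumptions, not kubelet code.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "sort"
    )

    // header matches the klog prefix on the kubenswrapper entries above, e.g.
    //   I1014 15:10:14.443222 4860 reconciler_common.go:218] "message" key="value"
    // captures: 1 = severity letter (I/W/E), 6 = the quoted message
    // (the message group tolerates \" escapes inside the quotes).
    var header = regexp.MustCompile(
        `([IWE])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+(\S+:\d+)\] "((?:[^"\\]|\\.)*)"`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 64*1024), 1<<20) // wrapped journal lines run long
        counts := map[string]int{}
        for sc.Scan() {
            // FindAll copes with several entries wrapped onto one physical
            // line, as in this excerpt.
            for _, m := range header.FindAllStringSubmatch(sc.Text(), -1) {
                counts[m[1]+"  "+m[6]]++
            }
        }
        keys := make([]string, 0, len(counts))
        for k := range counts {
            keys = append(keys, k)
        }
        sort.Slice(keys, func(i, j int) bool { return counts[keys[i]] > counts[keys[j]] })
        for _, k := range keys {
            fmt.Printf("%6d  %s\n", counts[k], k)
        }
    }

Run, hypothetically, as journalctl -u kubelet --no-pager | go run tally.go; on this window the top buckets would be the reconciler_common.go and operation_generator.go volume messages, which dominate pod startup.
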
Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.380963 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dbc5f5f64-m4tp9" event={"ID":"129a5016-7ba9-4901-abe0-9531c4129a99","Type":"ContainerStarted","Data":"2d115478f3ffd0b0fa73706c376b43c60549b269d9445fc330514e5218e7d606"}
Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.391308 4860 generic.go:334] "Generic (PLEG): container finished" podID="0ab412ac-4ad8-4281-a48a-50c957b45ce2" containerID="3500372be9c1e05fa6d553a22fbe27c475d7d7f4487714629cd8f9f464c8a821" exitCode=0
Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.391374 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" event={"ID":"0ab412ac-4ad8-4281-a48a-50c957b45ce2","Type":"ContainerDied","Data":"3500372be9c1e05fa6d553a22fbe27c475d7d7f4487714629cd8f9f464c8a821"}
Oct 14 15:10:17 crc kubenswrapper[4860]: I1014 15:10:17.563744 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b4bf5b577-882p6"
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.191122 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b4bf5b577-882p6"]
Oct 14 15:10:18 crc kubenswrapper[4860]: W1014 15:10:18.209564 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87973523_835b_4676_babb_8ed122fa8b93.slice/crio-6f42d6c440eb029926e2a76a36cb291539eafd65c481206cf85ddbe7bb5a2cd6 WatchSource:0}: Error finding container 6f42d6c440eb029926e2a76a36cb291539eafd65c481206cf85ddbe7bb5a2cd6: Status 404 returned error can't find the container with id 6f42d6c440eb029926e2a76a36cb291539eafd65c481206cf85ddbe7bb5a2cd6
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.417630 4860 generic.go:334] "Generic (PLEG): container finished" podID="3324c4e1-abc6-473d-8d14-28d41a4e27a8" containerID="e8f98dd80c026cf6fee32e32aa25db1f319ad8c9ece42a632eccf1b99c7e00c5" exitCode=0
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.417737 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dbsq9" event={"ID":"3324c4e1-abc6-473d-8d14-28d41a4e27a8","Type":"ContainerDied","Data":"e8f98dd80c026cf6fee32e32aa25db1f319ad8c9ece42a632eccf1b99c7e00c5"}
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.443779 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" event={"ID":"0ab412ac-4ad8-4281-a48a-50c957b45ce2","Type":"ContainerStarted","Data":"e63b0e30b1cbf1822a4f5b87bf9e67c8fb68a7a4e2bb329552e7aa90e7cc6767"}
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.443864 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-lnjrn"
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.451950 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb464cdf-6fb0-4ed3-9c3d-2a505478def4","Type":"ContainerStarted","Data":"f231a31bb7b2cc63a356540ef7fce623cb73366c9c880f0c061d1ba16a9300b4"}
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.454683 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1deda631-ca4a-40fe-95ce-a2c602baa9e7","Type":"ContainerStarted","Data":"6a979fc0b4003ae897247f28fb3d230754aca3d55d39714c78b7a924ed7a4108"}
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.456457 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b4bf5b577-882p6" event={"ID":"87973523-835b-4676-babb-8ed122fa8b93","Type":"ContainerStarted","Data":"6f42d6c440eb029926e2a76a36cb291539eafd65c481206cf85ddbe7bb5a2cd6"}
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.463081 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" podStartSLOduration=4.463056866 podStartE2EDuration="4.463056866s" podCreationTimestamp="2025-10-14 15:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:10:18.460464944 +0000 UTC m=+1280.047248393" watchObservedRunningTime="2025-10-14 15:10:18.463056866 +0000 UTC m=+1280.049840315"
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.463116 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dbc5f5f64-m4tp9" event={"ID":"129a5016-7ba9-4901-abe0-9531c4129a99","Type":"ContainerStarted","Data":"5cf3fc2a42dca552e15a4ba464b7f019ca6354fb4f64f7b13901b536223a4d5f"}
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.463931 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5dbc5f5f64-m4tp9"
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.484409 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.484391807 podStartE2EDuration="4.484391807s" podCreationTimestamp="2025-10-14 15:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:10:18.482635653 +0000 UTC m=+1280.069419122" watchObservedRunningTime="2025-10-14 15:10:18.484391807 +0000 UTC m=+1280.071175256"
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.546364 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.546347915 podStartE2EDuration="4.546347915s" podCreationTimestamp="2025-10-14 15:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:10:18.515716789 +0000 UTC m=+1280.102500238" watchObservedRunningTime="2025-10-14 15:10:18.546347915 +0000 UTC m=+1280.133131364"
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.552046 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5dbc5f5f64-m4tp9" podStartSLOduration=4.552016013 podStartE2EDuration="4.552016013s" podCreationTimestamp="2025-10-14 15:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:10:18.54448363 +0000 UTC m=+1280.131267079" watchObservedRunningTime="2025-10-14 15:10:18.552016013 +0000 UTC m=+1280.138799462"
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.643193 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dd7969c76-f8cq5"
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.643309 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dd7969c76-f8cq5"
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.819302 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8795558b4-cgsrj"
Oct 14 15:10:18 crc kubenswrapper[4860]: I1014 15:10:18.819911 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8795558b4-cgsrj"
Oct 14 15:10:19 crc kubenswrapper[4860]: I1014 15:10:19.473697 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b4bf5b577-882p6" event={"ID":"87973523-835b-4676-babb-8ed122fa8b93","Type":"ContainerStarted","Data":"e01d616335fae2a389fc742ef5430fff19be6eafb53ef7de074f6d60107bb917"}
Oct 14 15:10:19 crc kubenswrapper[4860]: I1014 15:10:19.473751 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b4bf5b577-882p6" event={"ID":"87973523-835b-4676-babb-8ed122fa8b93","Type":"ContainerStarted","Data":"5b580261a392a0b239c94f10590d6a3034b5987d8118dcf3fdd115752d9ea1b6"}
Oct 14 15:10:19 crc kubenswrapper[4860]: I1014 15:10:19.495936 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b4bf5b577-882p6" podStartSLOduration=3.495918471 podStartE2EDuration="3.495918471s" podCreationTimestamp="2025-10-14 15:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:10:19.490475139 +0000 UTC m=+1281.077258578" watchObservedRunningTime="2025-10-14 15:10:19.495918471 +0000 UTC m=+1281.082701920"
Oct 14 15:10:20 crc kubenswrapper[4860]: I1014 15:10:20.482509 4860 generic.go:334] "Generic (PLEG): container finished" podID="8616715a-5ecc-4bec-8e55-14626927cce5" containerID="6469106c14d5090a665c2bbd390f714e5f630ed44a6b1e2a12bb59c850325ed6" exitCode=0
Oct 14 15:10:20 crc kubenswrapper[4860]: I1014 15:10:20.483164 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wkzht" event={"ID":"8616715a-5ecc-4bec-8e55-14626927cce5","Type":"ContainerDied","Data":"6469106c14d5090a665c2bbd390f714e5f630ed44a6b1e2a12bb59c850325ed6"}
Oct 14 15:10:20 crc kubenswrapper[4860]: I1014 15:10:20.483213 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b4bf5b577-882p6"
Oct 14 15:10:21 crc kubenswrapper[4860]: I1014 15:10:21.445134 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b4d9d676f-t7pss"
Oct 14 15:10:24 crc kubenswrapper[4860]: I1014 15:10:24.578207 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-lnjrn"
Oct 14 15:10:24 crc kubenswrapper[4860]: I1014 15:10:24.639827 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-x4p9t"]
Oct 14 15:10:24 crc kubenswrapper[4860]: I1014 15:10:24.640166 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" podUID="6252f636-188e-4b89-8092-3ea73fe73fbe" containerName="dnsmasq-dns" containerID="cri-o://7d9ffd06c86853e24817c32040c554d316a0a24507afaa4edc1f5d1345f88ee3" gracePeriod=10
Oct 14 15:10:24 crc kubenswrapper[4860]: I1014 15:10:24.666378 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Oct 14 15:10:24 crc kubenswrapper[4860]: I1014 15:10:24.666427 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Oct 14 15:10:24 crc kubenswrapper[4860]: I1014 15:10:24.814707 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Oct 14 15:10:24 crc kubenswrapper[4860]: I1014 15:10:24.815097 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Oct 14 15:10:25 crc kubenswrapper[4860]: I1014 15:10:25.032750 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Oct 14 15:10:25 crc kubenswrapper[4860]: I1014 15:10:25.032792 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Oct 14 15:10:25 crc kubenswrapper[4860]: I1014 15:10:25.072325 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Oct 14 15:10:25 crc kubenswrapper[4860]: I1014 15:10:25.079910 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Oct 14 15:10:25 crc kubenswrapper[4860]: I1014 15:10:25.525662 4860 generic.go:334] "Generic (PLEG): container finished" podID="6252f636-188e-4b89-8092-3ea73fe73fbe" containerID="7d9ffd06c86853e24817c32040c554d316a0a24507afaa4edc1f5d1345f88ee3" exitCode=0
Oct 14 15:10:25 crc kubenswrapper[4860]: I1014 15:10:25.525737 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" event={"ID":"6252f636-188e-4b89-8092-3ea73fe73fbe","Type":"ContainerDied","Data":"7d9ffd06c86853e24817c32040c554d316a0a24507afaa4edc1f5d1345f88ee3"}
Oct 14 15:10:25 crc kubenswrapper[4860]: I1014 15:10:25.526403 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Oct 14 15:10:25 crc kubenswrapper[4860]: I1014 15:10:25.526423 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Oct 14 15:10:25 crc kubenswrapper[4860]: I1014 15:10:25.526433 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Oct 14 15:10:25 crc kubenswrapper[4860]: I1014 15:10:25.526441 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Oct 14 15:10:27 crc kubenswrapper[4860]: I1014 15:10:27.811234 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" podUID="6252f636-188e-4b89-8092-3ea73fe73fbe" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused"
Oct 14 15:10:28 crc kubenswrapper[4860]: I1014 15:10:28.642483 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7dd7969c76-f8cq5" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Oct 14 15:10:28 crc kubenswrapper[4860]: I1014 15:10:28.819955 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8795558b4-cgsrj" podUID="ba50439f-28b5-4b76-9afb-b705c4037f8d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
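[Annotation — not part of the log: the "Probe failed ... connection refused" entries above are the prober attempting a TCP/HTTP check against the pod IP while the target process is not yet (or no longer) listening; the dnsmasq-dns failure here is expected, since its container was just killed for deletion. A rough Go stand-in for one such probe attempt (address copied from the dnsmasq readiness failure above; function name is ours, not kubelet's):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probeOnce dials the target once; a refused connection counts as
    // a single probe failure, just like the prober.go:107 lines above.
    func probeOnce(addr string) error {
        conn, err := net.DialTimeout("tcp", addr, time.Second)
        if err != nil {
            return err // e.g. "dial tcp ...: connect: connection refused"
        }
        return conn.Close()
    }

    func main() {
        if err := probeOnce("10.217.0.144:5353"); err != nil {
            fmt.Println("probe failed:", err)
        }
    }
]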
connection refused" start-of-body= Oct 14 15:10:29 crc kubenswrapper[4860]: I1014 15:10:29.245166 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:10:29 crc kubenswrapper[4860]: I1014 15:10:29.245205 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 15:10:29 crc kubenswrapper[4860]: I1014 15:10:29.245899 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e8c816816ac6aa5296d7e14d541eea35fcda7f2a88ab8bc1a07386f6df3b2dd"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 15:10:29 crc kubenswrapper[4860]: I1014 15:10:29.245968 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://8e8c816816ac6aa5296d7e14d541eea35fcda7f2a88ab8bc1a07386f6df3b2dd" gracePeriod=600 Oct 14 15:10:30 crc kubenswrapper[4860]: I1014 15:10:30.588843 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="8e8c816816ac6aa5296d7e14d541eea35fcda7f2a88ab8bc1a07386f6df3b2dd" exitCode=0 Oct 14 15:10:30 crc kubenswrapper[4860]: I1014 15:10:30.588913 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"8e8c816816ac6aa5296d7e14d541eea35fcda7f2a88ab8bc1a07386f6df3b2dd"} Oct 14 15:10:30 crc kubenswrapper[4860]: I1014 15:10:30.589366 4860 scope.go:117] "RemoveContainer" containerID="7c40d8caa5e52e82b9243eb5410bd9850080abe3ed1c63b68f1d1d3b4330efe8" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.158020 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.177877 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-dbsq9" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.253662 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mph5q\" (UniqueName: \"kubernetes.io/projected/8616715a-5ecc-4bec-8e55-14626927cce5-kube-api-access-mph5q\") pod \"8616715a-5ecc-4bec-8e55-14626927cce5\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.253727 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-fernet-keys\") pod \"8616715a-5ecc-4bec-8e55-14626927cce5\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.253768 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-scripts\") pod \"8616715a-5ecc-4bec-8e55-14626927cce5\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.253870 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-config-data\") pod \"8616715a-5ecc-4bec-8e55-14626927cce5\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.253934 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-combined-ca-bundle\") pod \"8616715a-5ecc-4bec-8e55-14626927cce5\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.253998 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-credential-keys\") pod \"8616715a-5ecc-4bec-8e55-14626927cce5\" (UID: \"8616715a-5ecc-4bec-8e55-14626927cce5\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.267989 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-scripts" (OuterVolumeSpecName: "scripts") pod "8616715a-5ecc-4bec-8e55-14626927cce5" (UID: "8616715a-5ecc-4bec-8e55-14626927cce5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.268358 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8616715a-5ecc-4bec-8e55-14626927cce5-kube-api-access-mph5q" (OuterVolumeSpecName: "kube-api-access-mph5q") pod "8616715a-5ecc-4bec-8e55-14626927cce5" (UID: "8616715a-5ecc-4bec-8e55-14626927cce5"). InnerVolumeSpecName "kube-api-access-mph5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.282824 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8616715a-5ecc-4bec-8e55-14626927cce5" (UID: "8616715a-5ecc-4bec-8e55-14626927cce5"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.289765 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8616715a-5ecc-4bec-8e55-14626927cce5" (UID: "8616715a-5ecc-4bec-8e55-14626927cce5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.344879 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-config-data" (OuterVolumeSpecName: "config-data") pod "8616715a-5ecc-4bec-8e55-14626927cce5" (UID: "8616715a-5ecc-4bec-8e55-14626927cce5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.361302 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8616715a-5ecc-4bec-8e55-14626927cce5" (UID: "8616715a-5ecc-4bec-8e55-14626927cce5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.363573 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-config-data\") pod \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.363733 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vs4p\" (UniqueName: \"kubernetes.io/projected/3324c4e1-abc6-473d-8d14-28d41a4e27a8-kube-api-access-6vs4p\") pod \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.363752 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-scripts\") pod \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.363782 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3324c4e1-abc6-473d-8d14-28d41a4e27a8-logs\") pod \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.363837 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-combined-ca-bundle\") pod \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\" (UID: \"3324c4e1-abc6-473d-8d14-28d41a4e27a8\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.365289 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.365304 4860 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.365315 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mph5q\" (UniqueName: \"kubernetes.io/projected/8616715a-5ecc-4bec-8e55-14626927cce5-kube-api-access-mph5q\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.365324 4860 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.365332 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.365341 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8616715a-5ecc-4bec-8e55-14626927cce5-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.370164 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3324c4e1-abc6-473d-8d14-28d41a4e27a8-kube-api-access-6vs4p" (OuterVolumeSpecName: "kube-api-access-6vs4p") pod "3324c4e1-abc6-473d-8d14-28d41a4e27a8" (UID: "3324c4e1-abc6-473d-8d14-28d41a4e27a8"). InnerVolumeSpecName "kube-api-access-6vs4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.370403 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3324c4e1-abc6-473d-8d14-28d41a4e27a8-logs" (OuterVolumeSpecName: "logs") pod "3324c4e1-abc6-473d-8d14-28d41a4e27a8" (UID: "3324c4e1-abc6-473d-8d14-28d41a4e27a8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.372195 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-scripts" (OuterVolumeSpecName: "scripts") pod "3324c4e1-abc6-473d-8d14-28d41a4e27a8" (UID: "3324c4e1-abc6-473d-8d14-28d41a4e27a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.414861 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-config-data" (OuterVolumeSpecName: "config-data") pod "3324c4e1-abc6-473d-8d14-28d41a4e27a8" (UID: "3324c4e1-abc6-473d-8d14-28d41a4e27a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.421156 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3324c4e1-abc6-473d-8d14-28d41a4e27a8" (UID: "3324c4e1-abc6-473d-8d14-28d41a4e27a8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.467307 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vs4p\" (UniqueName: \"kubernetes.io/projected/3324c4e1-abc6-473d-8d14-28d41a4e27a8-kube-api-access-6vs4p\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.467336 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.467345 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3324c4e1-abc6-473d-8d14-28d41a4e27a8-logs\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.467354 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.467379 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3324c4e1-abc6-473d-8d14-28d41a4e27a8-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.553367 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.606834 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dbsq9" event={"ID":"3324c4e1-abc6-473d-8d14-28d41a4e27a8","Type":"ContainerDied","Data":"fa9f43088f302010bb9e2b081232f6b597b6569796f3ceff14b82227fb242d9e"} Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.606876 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa9f43088f302010bb9e2b081232f6b597b6569796f3ceff14b82227fb242d9e" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.606937 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dbsq9" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.610384 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wkzht" event={"ID":"8616715a-5ecc-4bec-8e55-14626927cce5","Type":"ContainerDied","Data":"70a89835a10582ee7d529019bf0feb24786cfe1cc7e4b22db749c51654191b99"} Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.610422 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70a89835a10582ee7d529019bf0feb24786cfe1cc7e4b22db749c51654191b99" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.610484 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wkzht" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.613525 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" event={"ID":"6252f636-188e-4b89-8092-3ea73fe73fbe","Type":"ContainerDied","Data":"81bbd94e813074cd6372482d49cfb7712e3e769a9ffda1e7ae07a351c45ed2ba"} Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.613590 4860 scope.go:117] "RemoveContainer" containerID="7d9ffd06c86853e24817c32040c554d316a0a24507afaa4edc1f5d1345f88ee3" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.613719 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-x4p9t" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.669797 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-dns-svc\") pod \"6252f636-188e-4b89-8092-3ea73fe73fbe\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.670149 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtc6q\" (UniqueName: \"kubernetes.io/projected/6252f636-188e-4b89-8092-3ea73fe73fbe-kube-api-access-vtc6q\") pod \"6252f636-188e-4b89-8092-3ea73fe73fbe\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.670184 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-config\") pod \"6252f636-188e-4b89-8092-3ea73fe73fbe\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.670255 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-ovsdbserver-nb\") pod \"6252f636-188e-4b89-8092-3ea73fe73fbe\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.670378 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-ovsdbserver-sb\") pod \"6252f636-188e-4b89-8092-3ea73fe73fbe\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.670421 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-dns-swift-storage-0\") pod \"6252f636-188e-4b89-8092-3ea73fe73fbe\" (UID: \"6252f636-188e-4b89-8092-3ea73fe73fbe\") " Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.674068 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6252f636-188e-4b89-8092-3ea73fe73fbe-kube-api-access-vtc6q" (OuterVolumeSpecName: "kube-api-access-vtc6q") pod "6252f636-188e-4b89-8092-3ea73fe73fbe" (UID: "6252f636-188e-4b89-8092-3ea73fe73fbe"). InnerVolumeSpecName "kube-api-access-vtc6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: E1014 15:10:32.676128 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Oct 14 15:10:32 crc kubenswrapper[4860]: E1014 15:10:32.676264 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qv95m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b84e0757-6fba-44cd-a37d-0e7c06eab0e4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.677946 4860 scope.go:117] "RemoveContainer" containerID="834dd2e7f80cc544d09752be20a68811aa81d0c5cda68707d1ed8c568133b827" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.715010 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-config" (OuterVolumeSpecName: "config") pod "6252f636-188e-4b89-8092-3ea73fe73fbe" (UID: "6252f636-188e-4b89-8092-3ea73fe73fbe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.718387 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6252f636-188e-4b89-8092-3ea73fe73fbe" (UID: "6252f636-188e-4b89-8092-3ea73fe73fbe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.720578 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6252f636-188e-4b89-8092-3ea73fe73fbe" (UID: "6252f636-188e-4b89-8092-3ea73fe73fbe"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.734635 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6252f636-188e-4b89-8092-3ea73fe73fbe" (UID: "6252f636-188e-4b89-8092-3ea73fe73fbe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.738681 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6252f636-188e-4b89-8092-3ea73fe73fbe" (UID: "6252f636-188e-4b89-8092-3ea73fe73fbe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.772197 4860 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.772226 4860 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.772235 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtc6q\" (UniqueName: \"kubernetes.io/projected/6252f636-188e-4b89-8092-3ea73fe73fbe-kube-api-access-vtc6q\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.772245 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.772255 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.772263 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6252f636-188e-4b89-8092-3ea73fe73fbe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.945713 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-x4p9t"] Oct 14 15:10:32 crc kubenswrapper[4860]: I1014 15:10:32.952114 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-x4p9t"] Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.072124 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6252f636-188e-4b89-8092-3ea73fe73fbe" path="/var/lib/kubelet/pods/6252f636-188e-4b89-8092-3ea73fe73fbe/volumes" Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.378342 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-79ffbddbb5-96v5k"] Oct 14 15:10:33 crc kubenswrapper[4860]: E1014 15:10:33.378752 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3324c4e1-abc6-473d-8d14-28d41a4e27a8" containerName="placement-db-sync" Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.378775 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3324c4e1-abc6-473d-8d14-28d41a4e27a8" containerName="placement-db-sync"
Oct 14 15:10:33 crc kubenswrapper[4860]: E1014 15:10:33.378793 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6252f636-188e-4b89-8092-3ea73fe73fbe" containerName="dnsmasq-dns"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.378800 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6252f636-188e-4b89-8092-3ea73fe73fbe" containerName="dnsmasq-dns"
Oct 14 15:10:33 crc kubenswrapper[4860]: E1014 15:10:33.378818 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8616715a-5ecc-4bec-8e55-14626927cce5" containerName="keystone-bootstrap"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.378823 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8616715a-5ecc-4bec-8e55-14626927cce5" containerName="keystone-bootstrap"
Oct 14 15:10:33 crc kubenswrapper[4860]: E1014 15:10:33.378836 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6252f636-188e-4b89-8092-3ea73fe73fbe" containerName="init"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.378842 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6252f636-188e-4b89-8092-3ea73fe73fbe" containerName="init"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.379007 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3324c4e1-abc6-473d-8d14-28d41a4e27a8" containerName="placement-db-sync"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.379020 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="8616715a-5ecc-4bec-8e55-14626927cce5" containerName="keystone-bootstrap"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.379046 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="6252f636-188e-4b89-8092-3ea73fe73fbe" containerName="dnsmasq-dns"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.379633 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.384767 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.385215 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.385533 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.385767 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.387362 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xsgm5"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.387594 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.399879 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-64cf955b6-w5x5t"]
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.401349 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.414594 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79ffbddbb5-96v5k"]
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.441320 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-f5hh4"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.441660 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.441922 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.442983 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.443283 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.484706 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2aac74-c03a-4d14-a332-ab84606c9864-internal-tls-certs\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.484964 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-config-data\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.485157 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2aac74-c03a-4d14-a332-ab84606c9864-config-data\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.494539 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-public-tls-certs\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.494621 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2aac74-c03a-4d14-a332-ab84606c9864-public-tls-certs\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.494662 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2aac74-c03a-4d14-a332-ab84606c9864-combined-ca-bundle\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.494720 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47q5v\" (UniqueName: \"kubernetes.io/projected/17bac919-7f29-4225-967b-1001b22075b4-kube-api-access-47q5v\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.494808 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-965ql\" (UniqueName: \"kubernetes.io/projected/ab2aac74-c03a-4d14-a332-ab84606c9864-kube-api-access-965ql\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.494846 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-scripts\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.494884 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-credential-keys\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.494938 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-fernet-keys\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.494971 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-internal-tls-certs\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.495016 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2aac74-c03a-4d14-a332-ab84606c9864-logs\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.495123 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-combined-ca-bundle\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.495155 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2aac74-c03a-4d14-a332-ab84606c9864-scripts\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
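[Annotation — not part of the log: volume setup for the new keystone and placement pods runs in two waves per volume: VerifyControllerAttachedVolume first confirms the volume is attached/available, then MountVolume.SetUp (below) mounts it into the pod directory. A toy Go reconciler with the same two-phase shape; all types and names here are illustrative, not kubelet internals:

    package main

    import "fmt"

    type volume struct {
        name              string
        attached, mounted bool
    }

    // reconcile mirrors the log's ordering: verify attachment for every
    // desired volume, then mount whatever is attached but not yet mounted.
    func reconcile(vols []*volume) {
        for _, v := range vols {
            if !v.attached {
                v.attached = true
                fmt.Println("VerifyControllerAttachedVolume started for", v.name)
            }
        }
        for _, v := range vols {
            if v.attached && !v.mounted {
                v.mounted = true
                fmt.Println("MountVolume.SetUp succeeded for", v.name)
            }
        }
    }

    func main() {
        reconcile([]*volume{{name: "config-data"}, {name: "scripts"}})
    }
]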
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.491302 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64cf955b6-w5x5t"]
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.596658 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-config-data\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.596780 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2aac74-c03a-4d14-a332-ab84606c9864-config-data\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.596810 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-public-tls-certs\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.596843 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2aac74-c03a-4d14-a332-ab84606c9864-public-tls-certs\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.596872 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2aac74-c03a-4d14-a332-ab84606c9864-combined-ca-bundle\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.596909 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47q5v\" (UniqueName: \"kubernetes.io/projected/17bac919-7f29-4225-967b-1001b22075b4-kube-api-access-47q5v\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.596952 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-965ql\" (UniqueName: \"kubernetes.io/projected/ab2aac74-c03a-4d14-a332-ab84606c9864-kube-api-access-965ql\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.596981 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-scripts\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.597007 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-credential-keys\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.597057 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-fernet-keys\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.597088 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-internal-tls-certs\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.597119 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2aac74-c03a-4d14-a332-ab84606c9864-logs\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.597156 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-combined-ca-bundle\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.597183 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2aac74-c03a-4d14-a332-ab84606c9864-scripts\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.597210 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2aac74-c03a-4d14-a332-ab84606c9864-internal-tls-certs\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.605291 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-internal-tls-certs\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.606481 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-scripts\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.610534 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2aac74-c03a-4d14-a332-ab84606c9864-logs\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.611660 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-public-tls-certs\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.611859 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-combined-ca-bundle\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.612315 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2aac74-c03a-4d14-a332-ab84606c9864-public-tls-certs\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.613129 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-credential-keys\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.616406 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2aac74-c03a-4d14-a332-ab84606c9864-internal-tls-certs\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.616488 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2aac74-c03a-4d14-a332-ab84606c9864-config-data\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.616521 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-fernet-keys\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.616749 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2aac74-c03a-4d14-a332-ab84606c9864-combined-ca-bundle\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.625281 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642"}
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.626327 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2aac74-c03a-4d14-a332-ab84606c9864-scripts\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.628436 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bac919-7f29-4225-967b-1001b22075b4-config-data\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.631723 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47q5v\" (UniqueName: \"kubernetes.io/projected/17bac919-7f29-4225-967b-1001b22075b4-kube-api-access-47q5v\") pod \"keystone-79ffbddbb5-96v5k\" (UID: \"17bac919-7f29-4225-967b-1001b22075b4\") " pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.640544 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-965ql\" (UniqueName: \"kubernetes.io/projected/ab2aac74-c03a-4d14-a332-ab84606c9864-kube-api-access-965ql\") pod \"placement-64cf955b6-w5x5t\" (UID: \"ab2aac74-c03a-4d14-a332-ab84606c9864\") " pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.782653 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:33 crc kubenswrapper[4860]: I1014 15:10:33.792105 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:34 crc kubenswrapper[4860]: I1014 15:10:34.344312 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79ffbddbb5-96v5k"]
Oct 14 15:10:34 crc kubenswrapper[4860]: I1014 15:10:34.419853 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64cf955b6-w5x5t"]
Oct 14 15:10:34 crc kubenswrapper[4860]: I1014 15:10:34.635135 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64cf955b6-w5x5t" event={"ID":"ab2aac74-c03a-4d14-a332-ab84606c9864","Type":"ContainerStarted","Data":"2958b3988b5112ff7b1f5bb0704b62c331e1af0b9a3b7391b3e61a33f0823adb"}
Oct 14 15:10:34 crc kubenswrapper[4860]: I1014 15:10:34.636949 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79ffbddbb5-96v5k" event={"ID":"17bac919-7f29-4225-967b-1001b22075b4","Type":"ContainerStarted","Data":"ca7dfbaa53d343219023fea478e8f9d57941a6294e4b35cae3786ee9aed63b66"}
Oct 14 15:10:35 crc kubenswrapper[4860]: I1014 15:10:35.648873 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79ffbddbb5-96v5k" event={"ID":"17bac919-7f29-4225-967b-1001b22075b4","Type":"ContainerStarted","Data":"feff84aac634693519fb26a8288f312fa0598b74354a23e195a9e6fed0df5f99"}
Oct 14 15:10:35 crc kubenswrapper[4860]: I1014 15:10:35.649346 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-79ffbddbb5-96v5k"
Oct 14 15:10:35 crc kubenswrapper[4860]: I1014 15:10:35.650580 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64cf955b6-w5x5t" event={"ID":"ab2aac74-c03a-4d14-a332-ab84606c9864","Type":"ContainerStarted","Data":"037684a964e407dd2768421508e2122bf4748654b026f8fe566201b3ef8cf203"}
Oct 14 15:10:35 crc kubenswrapper[4860]: I1014 15:10:35.675740 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-79ffbddbb5-96v5k" podStartSLOduration=2.675719718 podStartE2EDuration="2.675719718s" podCreationTimestamp="2025-10-14 15:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:10:35.6725361 +0000 UTC m=+1297.259319569" watchObservedRunningTime="2025-10-14 15:10:35.675719718 +0000 UTC m=+1297.262503167"
Oct 14 15:10:36 crc kubenswrapper[4860]: I1014 15:10:36.662998 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64cf955b6-w5x5t" event={"ID":"ab2aac74-c03a-4d14-a332-ab84606c9864","Type":"ContainerStarted","Data":"9cbc96084c99749386e02a4124c5f588a505901a1e2d35c35b3bca13f5a1b797"}
Oct 14 15:10:36 crc kubenswrapper[4860]: I1014 15:10:36.663851 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:36 crc kubenswrapper[4860]: I1014 15:10:36.690987 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-64cf955b6-w5x5t" podStartSLOduration=3.690964083 podStartE2EDuration="3.690964083s" podCreationTimestamp="2025-10-14 15:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:10:36.68553054 +0000 UTC m=+1298.272313999" watchObservedRunningTime="2025-10-14 15:10:36.690964083 +0000 UTC m=+1298.277747532"
Oct 14 15:10:37 crc kubenswrapper[4860]: I1014 15:10:37.671954 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-64cf955b6-w5x5t"
Oct 14 15:10:38 crc kubenswrapper[4860]: I1014 15:10:38.640680 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7dd7969c76-f8cq5" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Oct 14 15:10:38 crc kubenswrapper[4860]: I1014 15:10:38.698782 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x2247" event={"ID":"f0a3bc02-1357-4751-9496-a41526515867","Type":"ContainerStarted","Data":"5b81caa0fbe5a103584a9a6706463d60e8f0c80f69e219815ce3c43e0ccf8981"}
Oct 14 15:10:38 crc kubenswrapper[4860]: I1014 15:10:38.709069 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-grpb9" event={"ID":"ca080412-b618-4293-a06d-e0d9a774d36b","Type":"ContainerStarted","Data":"eeeea6721fbd5ee86c2edf8027ffeb6a868461bddeb38914b49777fdd6c5c4dd"}
Oct 14 15:10:38 crc kubenswrapper[4860]: I1014 15:10:38.734546 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-x2247" podStartSLOduration=4.469282357 podStartE2EDuration="1m12.734519702s" podCreationTimestamp="2025-10-14 15:09:26 +0000 UTC" firstStartedPulling="2025-10-14 15:09:28.948598282 +0000 UTC m=+1230.535381731" lastFinishedPulling="2025-10-14 15:10:37.213835637 +0000 UTC m=+1298.800619076" observedRunningTime="2025-10-14 15:10:38.722503779 +0000 UTC m=+1300.309287228" watchObservedRunningTime="2025-10-14 15:10:38.734519702 +0000 UTC m=+1300.321303161"
Oct 14 15:10:38 crc kubenswrapper[4860]: I1014 15:10:38.820615 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8795558b4-cgsrj" podUID="ba50439f-28b5-4b76-9afb-b705c4037f8d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection
refused" Oct 14 15:10:42 crc kubenswrapper[4860]: I1014 15:10:42.756819 4860 generic.go:334] "Generic (PLEG): container finished" podID="531feef3-54a8-4a76-b87f-4fe76d0c7e46" containerID="c1523cd43cc6372f8bf1d0026cfba9e4a0296bb3f3015fadf98dba296e94fbbc" exitCode=137 Oct 14 15:10:42 crc kubenswrapper[4860]: I1014 15:10:42.757390 4860 generic.go:334] "Generic (PLEG): container finished" podID="531feef3-54a8-4a76-b87f-4fe76d0c7e46" containerID="62c06d70cda9431e2f03e4c3f2a0b5a526287c6d550a4f0943cba87bebdf50c8" exitCode=137 Oct 14 15:10:42 crc kubenswrapper[4860]: I1014 15:10:42.756904 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b4d9d676f-t7pss" event={"ID":"531feef3-54a8-4a76-b87f-4fe76d0c7e46","Type":"ContainerDied","Data":"c1523cd43cc6372f8bf1d0026cfba9e4a0296bb3f3015fadf98dba296e94fbbc"} Oct 14 15:10:42 crc kubenswrapper[4860]: I1014 15:10:42.757432 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b4d9d676f-t7pss" event={"ID":"531feef3-54a8-4a76-b87f-4fe76d0c7e46","Type":"ContainerDied","Data":"62c06d70cda9431e2f03e4c3f2a0b5a526287c6d550a4f0943cba87bebdf50c8"} Oct 14 15:10:45 crc kubenswrapper[4860]: I1014 15:10:45.067608 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-5dbc5f5f64-m4tp9" podUID="129a5016-7ba9-4901-abe0-9531c4129a99" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Oct 14 15:10:45 crc kubenswrapper[4860]: I1014 15:10:45.070065 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-5dbc5f5f64-m4tp9" podUID="129a5016-7ba9-4901-abe0-9531c4129a99" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Oct 14 15:10:45 crc kubenswrapper[4860]: I1014 15:10:45.071178 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5dbc5f5f64-m4tp9" podUID="129a5016-7ba9-4901-abe0-9531c4129a99" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Oct 14 15:10:47 crc kubenswrapper[4860]: I1014 15:10:47.572061 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-b4bf5b577-882p6" podUID="87973523-835b-4676-babb-8ed122fa8b93" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Oct 14 15:10:47 crc kubenswrapper[4860]: I1014 15:10:47.572146 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-b4bf5b577-882p6" podUID="87973523-835b-4676-babb-8ed122fa8b93" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Oct 14 15:10:47 crc kubenswrapper[4860]: I1014 15:10:47.574786 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-b4bf5b577-882p6" podUID="87973523-835b-4676-babb-8ed122fa8b93" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Oct 14 15:10:48 crc kubenswrapper[4860]: I1014 15:10:48.641160 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7dd7969c76-f8cq5" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Oct 14 15:10:48 crc kubenswrapper[4860]: I1014 15:10:48.641590 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 
15:10:48 crc kubenswrapper[4860]: I1014 15:10:48.642463 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"48c829aeecd60e8eb72c1f7f8f0dd773866393ac607409fd129497c22dfd7dfc"} pod="openstack/horizon-7dd7969c76-f8cq5" containerMessage="Container horizon failed startup probe, will be restarted" Oct 14 15:10:48 crc kubenswrapper[4860]: I1014 15:10:48.642501 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7dd7969c76-f8cq5" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon" containerID="cri-o://48c829aeecd60e8eb72c1f7f8f0dd773866393ac607409fd129497c22dfd7dfc" gracePeriod=30 Oct 14 15:10:48 crc kubenswrapper[4860]: I1014 15:10:48.818626 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8795558b4-cgsrj" podUID="ba50439f-28b5-4b76-9afb-b705c4037f8d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Oct 14 15:10:48 crc kubenswrapper[4860]: I1014 15:10:48.818696 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:10:48 crc kubenswrapper[4860]: I1014 15:10:48.819397 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"77384f8c762ca369199fe7f2734dfaaa8f59ec6ad97c1602f4bac3fd00f71d13"} pod="openstack/horizon-8795558b4-cgsrj" containerMessage="Container horizon failed startup probe, will be restarted" Oct 14 15:10:48 crc kubenswrapper[4860]: I1014 15:10:48.819432 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8795558b4-cgsrj" podUID="ba50439f-28b5-4b76-9afb-b705c4037f8d" containerName="horizon" containerID="cri-o://77384f8c762ca369199fe7f2734dfaaa8f59ec6ad97c1602f4bac3fd00f71d13" gracePeriod=30 Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.103808 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.103954 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.133741 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-grpb9" podStartSLOduration=15.390544648 podStartE2EDuration="1m24.1337204s" podCreationTimestamp="2025-10-14 15:09:26 +0000 UTC" firstStartedPulling="2025-10-14 15:09:28.47249626 +0000 UTC m=+1230.059279709" lastFinishedPulling="2025-10-14 15:10:37.215672012 +0000 UTC m=+1298.802455461" observedRunningTime="2025-10-14 15:10:38.745298934 +0000 UTC m=+1300.332082393" watchObservedRunningTime="2025-10-14 15:10:50.1337204 +0000 UTC m=+1311.720503849" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.141073 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.141176 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.286089 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.340911 4860 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.695670 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.720405 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb2m7\" (UniqueName: \"kubernetes.io/projected/531feef3-54a8-4a76-b87f-4fe76d0c7e46-kube-api-access-fb2m7\") pod \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.720509 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/531feef3-54a8-4a76-b87f-4fe76d0c7e46-config-data\") pod \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.720561 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/531feef3-54a8-4a76-b87f-4fe76d0c7e46-logs\") pod \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.720622 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/531feef3-54a8-4a76-b87f-4fe76d0c7e46-horizon-secret-key\") pod \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.720683 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/531feef3-54a8-4a76-b87f-4fe76d0c7e46-scripts\") pod \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\" (UID: \"531feef3-54a8-4a76-b87f-4fe76d0c7e46\") " Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.721684 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/531feef3-54a8-4a76-b87f-4fe76d0c7e46-logs" (OuterVolumeSpecName: "logs") pod "531feef3-54a8-4a76-b87f-4fe76d0c7e46" (UID: "531feef3-54a8-4a76-b87f-4fe76d0c7e46"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.749965 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/531feef3-54a8-4a76-b87f-4fe76d0c7e46-kube-api-access-fb2m7" (OuterVolumeSpecName: "kube-api-access-fb2m7") pod "531feef3-54a8-4a76-b87f-4fe76d0c7e46" (UID: "531feef3-54a8-4a76-b87f-4fe76d0c7e46"). InnerVolumeSpecName "kube-api-access-fb2m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.754280 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/531feef3-54a8-4a76-b87f-4fe76d0c7e46-config-data" (OuterVolumeSpecName: "config-data") pod "531feef3-54a8-4a76-b87f-4fe76d0c7e46" (UID: "531feef3-54a8-4a76-b87f-4fe76d0c7e46"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.756168 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531feef3-54a8-4a76-b87f-4fe76d0c7e46-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "531feef3-54a8-4a76-b87f-4fe76d0c7e46" (UID: "531feef3-54a8-4a76-b87f-4fe76d0c7e46"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.790601 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/531feef3-54a8-4a76-b87f-4fe76d0c7e46-scripts" (OuterVolumeSpecName: "scripts") pod "531feef3-54a8-4a76-b87f-4fe76d0c7e46" (UID: "531feef3-54a8-4a76-b87f-4fe76d0c7e46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.821777 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/531feef3-54a8-4a76-b87f-4fe76d0c7e46-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.821802 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/531feef3-54a8-4a76-b87f-4fe76d0c7e46-logs\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.821811 4860 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/531feef3-54a8-4a76-b87f-4fe76d0c7e46-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.821822 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/531feef3-54a8-4a76-b87f-4fe76d0c7e46-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.821830 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb2m7\" (UniqueName: \"kubernetes.io/projected/531feef3-54a8-4a76-b87f-4fe76d0c7e46-kube-api-access-fb2m7\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.824537 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b4d9d676f-t7pss" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.836995 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b4d9d676f-t7pss" event={"ID":"531feef3-54a8-4a76-b87f-4fe76d0c7e46","Type":"ContainerDied","Data":"b014f8b9c9cf5c6a59765e0b37799814280f0732bf339f2a1858b8c08b4c18d8"} Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.839733 4860 scope.go:117] "RemoveContainer" containerID="c1523cd43cc6372f8bf1d0026cfba9e4a0296bb3f3015fadf98dba296e94fbbc" Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.902481 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b4d9d676f-t7pss"] Oct 14 15:10:50 crc kubenswrapper[4860]: I1014 15:10:50.915306 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6b4d9d676f-t7pss"] Oct 14 15:10:51 crc kubenswrapper[4860]: I1014 15:10:51.078794 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="531feef3-54a8-4a76-b87f-4fe76d0c7e46" path="/var/lib/kubelet/pods/531feef3-54a8-4a76-b87f-4fe76d0c7e46/volumes" Oct 14 15:10:51 crc kubenswrapper[4860]: E1014 15:10:51.693084 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Oct 14 15:10:51 crc kubenswrapper[4860]: E1014 15:10:51.693280 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qv95m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b84e0757-6fba-44cd-a37d-0e7c06eab0e4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 15:10:51 crc kubenswrapper[4860]: E1014 15:10:51.694416 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="b84e0757-6fba-44cd-a37d-0e7c06eab0e4" Oct 14 15:10:51 crc kubenswrapper[4860]: I1014 15:10:51.834352 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b84e0757-6fba-44cd-a37d-0e7c06eab0e4" containerName="ceilometer-central-agent" containerID="cri-o://4062f398bce61d7246ad507365f65e675b877bd7bde754da04411c7405d59083" gracePeriod=30 Oct 14 15:10:51 crc kubenswrapper[4860]: I1014 15:10:51.834858 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b84e0757-6fba-44cd-a37d-0e7c06eab0e4" containerName="ceilometer-notification-agent" containerID="cri-o://2001b635f398fb96fa37e401970bac857fd553d03526ea4c5f1036a1eda74ec2" gracePeriod=30 Oct 14 15:10:51 crc kubenswrapper[4860]: I1014 15:10:51.839152 4860 scope.go:117] "RemoveContainer" containerID="62c06d70cda9431e2f03e4c3f2a0b5a526287c6d550a4f0943cba87bebdf50c8" Oct 14 15:10:52 crc kubenswrapper[4860]: I1014 15:10:52.855648 4860 generic.go:334] "Generic (PLEG): container finished" podID="b84e0757-6fba-44cd-a37d-0e7c06eab0e4" containerID="2001b635f398fb96fa37e401970bac857fd553d03526ea4c5f1036a1eda74ec2" exitCode=0 Oct 14 15:10:52 crc kubenswrapper[4860]: I1014 15:10:52.855709 4860 generic.go:334] "Generic (PLEG): container finished" podID="b84e0757-6fba-44cd-a37d-0e7c06eab0e4" containerID="4062f398bce61d7246ad507365f65e675b877bd7bde754da04411c7405d59083" exitCode=0 Oct 14 15:10:52 crc kubenswrapper[4860]: I1014 15:10:52.855835 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b84e0757-6fba-44cd-a37d-0e7c06eab0e4","Type":"ContainerDied","Data":"2001b635f398fb96fa37e401970bac857fd553d03526ea4c5f1036a1eda74ec2"} Oct 14 15:10:52 crc kubenswrapper[4860]: I1014 15:10:52.855891 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b84e0757-6fba-44cd-a37d-0e7c06eab0e4","Type":"ContainerDied","Data":"4062f398bce61d7246ad507365f65e675b877bd7bde754da04411c7405d59083"} Oct 14 15:10:52 crc 
kubenswrapper[4860]: I1014 15:10:52.965637 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.082667 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-combined-ca-bundle\") pod \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.082750 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-config-data\") pod \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.082792 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv95m\" (UniqueName: \"kubernetes.io/projected/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-kube-api-access-qv95m\") pod \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.082853 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-run-httpd\") pod \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.082947 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-scripts\") pod \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.082995 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-sg-core-conf-yaml\") pod \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.083066 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-log-httpd\") pod \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\" (UID: \"b84e0757-6fba-44cd-a37d-0e7c06eab0e4\") " Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.083525 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b84e0757-6fba-44cd-a37d-0e7c06eab0e4" (UID: "b84e0757-6fba-44cd-a37d-0e7c06eab0e4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.084194 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b84e0757-6fba-44cd-a37d-0e7c06eab0e4" (UID: "b84e0757-6fba-44cd-a37d-0e7c06eab0e4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.106257 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-scripts" (OuterVolumeSpecName: "scripts") pod "b84e0757-6fba-44cd-a37d-0e7c06eab0e4" (UID: "b84e0757-6fba-44cd-a37d-0e7c06eab0e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.106375 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-kube-api-access-qv95m" (OuterVolumeSpecName: "kube-api-access-qv95m") pod "b84e0757-6fba-44cd-a37d-0e7c06eab0e4" (UID: "b84e0757-6fba-44cd-a37d-0e7c06eab0e4"). InnerVolumeSpecName "kube-api-access-qv95m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.120135 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b84e0757-6fba-44cd-a37d-0e7c06eab0e4" (UID: "b84e0757-6fba-44cd-a37d-0e7c06eab0e4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.137830 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b84e0757-6fba-44cd-a37d-0e7c06eab0e4" (UID: "b84e0757-6fba-44cd-a37d-0e7c06eab0e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.180690 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-config-data" (OuterVolumeSpecName: "config-data") pod "b84e0757-6fba-44cd-a37d-0e7c06eab0e4" (UID: "b84e0757-6fba-44cd-a37d-0e7c06eab0e4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.185931 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.185978 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.185992 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.186002 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv95m\" (UniqueName: \"kubernetes.io/projected/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-kube-api-access-qv95m\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.186022 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.186057 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.186068 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b84e0757-6fba-44cd-a37d-0e7c06eab0e4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.907584 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b84e0757-6fba-44cd-a37d-0e7c06eab0e4","Type":"ContainerDied","Data":"bc04e7d8a088bdfc6fef0b96e20eba16d949562021fc648ade4495125c166315"} Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.907627 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.907658 4860 scope.go:117] "RemoveContainer" containerID="2001b635f398fb96fa37e401970bac857fd553d03526ea4c5f1036a1eda74ec2" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.910215 4860 generic.go:334] "Generic (PLEG): container finished" podID="f0a3bc02-1357-4751-9496-a41526515867" containerID="5b81caa0fbe5a103584a9a6706463d60e8f0c80f69e219815ce3c43e0ccf8981" exitCode=0 Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.910253 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x2247" event={"ID":"f0a3bc02-1357-4751-9496-a41526515867","Type":"ContainerDied","Data":"5b81caa0fbe5a103584a9a6706463d60e8f0c80f69e219815ce3c43e0ccf8981"} Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.927515 4860 scope.go:117] "RemoveContainer" containerID="4062f398bce61d7246ad507365f65e675b877bd7bde754da04411c7405d59083" Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.989711 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:10:53 crc kubenswrapper[4860]: I1014 15:10:53.998253 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.039531 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:10:54 crc kubenswrapper[4860]: E1014 15:10:54.040107 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b84e0757-6fba-44cd-a37d-0e7c06eab0e4" containerName="ceilometer-central-agent" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.040182 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b84e0757-6fba-44cd-a37d-0e7c06eab0e4" containerName="ceilometer-central-agent" Oct 14 15:10:54 crc kubenswrapper[4860]: E1014 15:10:54.040272 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="531feef3-54a8-4a76-b87f-4fe76d0c7e46" containerName="horizon" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.040352 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="531feef3-54a8-4a76-b87f-4fe76d0c7e46" containerName="horizon" Oct 14 15:10:54 crc kubenswrapper[4860]: E1014 15:10:54.040426 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="531feef3-54a8-4a76-b87f-4fe76d0c7e46" containerName="horizon-log" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.040488 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="531feef3-54a8-4a76-b87f-4fe76d0c7e46" containerName="horizon-log" Oct 14 15:10:54 crc kubenswrapper[4860]: E1014 15:10:54.040564 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b84e0757-6fba-44cd-a37d-0e7c06eab0e4" containerName="ceilometer-notification-agent" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.040614 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b84e0757-6fba-44cd-a37d-0e7c06eab0e4" containerName="ceilometer-notification-agent" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.040812 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="b84e0757-6fba-44cd-a37d-0e7c06eab0e4" containerName="ceilometer-central-agent" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.040896 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="531feef3-54a8-4a76-b87f-4fe76d0c7e46" containerName="horizon" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.040972 4860 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="531feef3-54a8-4a76-b87f-4fe76d0c7e46" containerName="horizon-log" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.041090 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="b84e0757-6fba-44cd-a37d-0e7c06eab0e4" containerName="ceilometer-notification-agent" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.043080 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.047383 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.047783 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.056500 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.206003 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-config-data\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.206441 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.206498 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.206616 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec30a67-6982-40fb-9bf5-8134cefa0429-log-httpd\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.206650 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec30a67-6982-40fb-9bf5-8134cefa0429-run-httpd\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.206749 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2dsx\" (UniqueName: \"kubernetes.io/projected/7ec30a67-6982-40fb-9bf5-8134cefa0429-kube-api-access-k2dsx\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.206820 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-scripts\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" 
Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.308577 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-scripts\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.308682 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-config-data\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.308763 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.308809 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.308876 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec30a67-6982-40fb-9bf5-8134cefa0429-log-httpd\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.308894 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec30a67-6982-40fb-9bf5-8134cefa0429-run-httpd\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.308954 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2dsx\" (UniqueName: \"kubernetes.io/projected/7ec30a67-6982-40fb-9bf5-8134cefa0429-kube-api-access-k2dsx\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.312842 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec30a67-6982-40fb-9bf5-8134cefa0429-run-httpd\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.313096 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec30a67-6982-40fb-9bf5-8134cefa0429-log-httpd\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.317210 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 
15:10:54.317519 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.317800 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-scripts\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.333986 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-config-data\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.337439 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2dsx\" (UniqueName: \"kubernetes.io/projected/7ec30a67-6982-40fb-9bf5-8134cefa0429-kube-api-access-k2dsx\") pod \"ceilometer-0\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.367708 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.829150 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:10:54 crc kubenswrapper[4860]: W1014 15:10:54.833980 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ec30a67_6982_40fb_9bf5_8134cefa0429.slice/crio-441bf870241d41510ca1d82358cc84a8b5fab76bfcb8ffc512ca87c64ef61175 WatchSource:0}: Error finding container 441bf870241d41510ca1d82358cc84a8b5fab76bfcb8ffc512ca87c64ef61175: Status 404 returned error can't find the container with id 441bf870241d41510ca1d82358cc84a8b5fab76bfcb8ffc512ca87c64ef61175 Oct 14 15:10:54 crc kubenswrapper[4860]: I1014 15:10:54.917734 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec30a67-6982-40fb-9bf5-8134cefa0429","Type":"ContainerStarted","Data":"441bf870241d41510ca1d82358cc84a8b5fab76bfcb8ffc512ca87c64ef61175"} Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.076781 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b84e0757-6fba-44cd-a37d-0e7c06eab0e4" path="/var/lib/kubelet/pods/b84e0757-6fba-44cd-a37d-0e7c06eab0e4/volumes" Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.292754 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-x2247" Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.440009 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79rf9\" (UniqueName: \"kubernetes.io/projected/f0a3bc02-1357-4751-9496-a41526515867-kube-api-access-79rf9\") pod \"f0a3bc02-1357-4751-9496-a41526515867\" (UID: \"f0a3bc02-1357-4751-9496-a41526515867\") " Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.440364 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0a3bc02-1357-4751-9496-a41526515867-db-sync-config-data\") pod \"f0a3bc02-1357-4751-9496-a41526515867\" (UID: \"f0a3bc02-1357-4751-9496-a41526515867\") " Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.440435 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a3bc02-1357-4751-9496-a41526515867-combined-ca-bundle\") pod \"f0a3bc02-1357-4751-9496-a41526515867\" (UID: \"f0a3bc02-1357-4751-9496-a41526515867\") " Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.449387 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0a3bc02-1357-4751-9496-a41526515867-kube-api-access-79rf9" (OuterVolumeSpecName: "kube-api-access-79rf9") pod "f0a3bc02-1357-4751-9496-a41526515867" (UID: "f0a3bc02-1357-4751-9496-a41526515867"). InnerVolumeSpecName "kube-api-access-79rf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.461479 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0a3bc02-1357-4751-9496-a41526515867-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f0a3bc02-1357-4751-9496-a41526515867" (UID: "f0a3bc02-1357-4751-9496-a41526515867"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.475937 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0a3bc02-1357-4751-9496-a41526515867-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0a3bc02-1357-4751-9496-a41526515867" (UID: "f0a3bc02-1357-4751-9496-a41526515867"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.545630 4860 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0a3bc02-1357-4751-9496-a41526515867-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.545678 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a3bc02-1357-4751-9496-a41526515867-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.545688 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79rf9\" (UniqueName: \"kubernetes.io/projected/f0a3bc02-1357-4751-9496-a41526515867-kube-api-access-79rf9\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.928471 4860 generic.go:334] "Generic (PLEG): container finished" podID="ca080412-b618-4293-a06d-e0d9a774d36b" containerID="eeeea6721fbd5ee86c2edf8027ffeb6a868461bddeb38914b49777fdd6c5c4dd" exitCode=0 Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.928552 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-grpb9" event={"ID":"ca080412-b618-4293-a06d-e0d9a774d36b","Type":"ContainerDied","Data":"eeeea6721fbd5ee86c2edf8027ffeb6a868461bddeb38914b49777fdd6c5c4dd"} Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.930297 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec30a67-6982-40fb-9bf5-8134cefa0429","Type":"ContainerStarted","Data":"83fde1cf132eac68a5b730764f8a2070291cd29748729b6d1e687b561ff235b6"} Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.931992 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x2247" event={"ID":"f0a3bc02-1357-4751-9496-a41526515867","Type":"ContainerDied","Data":"b6f0871a5cf1723397d56635c1f9b2b4683c642398ce15541c40fba9d5cb7e39"} Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.932047 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6f0871a5cf1723397d56635c1f9b2b4683c642398ce15541c40fba9d5cb7e39" Oct 14 15:10:55 crc kubenswrapper[4860]: I1014 15:10:55.932073 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-x2247" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.226157 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-98dc5ccc5-l88l9"] Oct 14 15:10:56 crc kubenswrapper[4860]: E1014 15:10:56.226820 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0a3bc02-1357-4751-9496-a41526515867" containerName="barbican-db-sync" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.226838 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0a3bc02-1357-4751-9496-a41526515867" containerName="barbican-db-sync" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.227017 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0a3bc02-1357-4751-9496-a41526515867" containerName="barbican-db-sync" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.227886 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.241709 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.241924 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-8h6vr" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.242193 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.361626 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef6678e8-7116-4dc1-a7cd-420317d521eb-logs\") pod \"barbican-worker-98dc5ccc5-l88l9\" (UID: \"ef6678e8-7116-4dc1-a7cd-420317d521eb\") " pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.361689 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6678e8-7116-4dc1-a7cd-420317d521eb-config-data\") pod \"barbican-worker-98dc5ccc5-l88l9\" (UID: \"ef6678e8-7116-4dc1-a7cd-420317d521eb\") " pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.361770 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef6678e8-7116-4dc1-a7cd-420317d521eb-config-data-custom\") pod \"barbican-worker-98dc5ccc5-l88l9\" (UID: \"ef6678e8-7116-4dc1-a7cd-420317d521eb\") " pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.361816 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6678e8-7116-4dc1-a7cd-420317d521eb-combined-ca-bundle\") pod \"barbican-worker-98dc5ccc5-l88l9\" (UID: \"ef6678e8-7116-4dc1-a7cd-420317d521eb\") " pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.361849 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qcwv\" (UniqueName: \"kubernetes.io/projected/ef6678e8-7116-4dc1-a7cd-420317d521eb-kube-api-access-9qcwv\") pod \"barbican-worker-98dc5ccc5-l88l9\" (UID: \"ef6678e8-7116-4dc1-a7cd-420317d521eb\") " pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.422757 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-98dc5ccc5-l88l9"] Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.475087 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-8647888b98-65v2r"] Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.476811 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.484309 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff1ff7d7-b307-4f43-a76a-09da21f5fd05-logs\") pod \"barbican-keystone-listener-8647888b98-65v2r\" (UID: \"ff1ff7d7-b307-4f43-a76a-09da21f5fd05\") " pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.484388 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1ff7d7-b307-4f43-a76a-09da21f5fd05-combined-ca-bundle\") pod \"barbican-keystone-listener-8647888b98-65v2r\" (UID: \"ff1ff7d7-b307-4f43-a76a-09da21f5fd05\") " pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.484428 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef6678e8-7116-4dc1-a7cd-420317d521eb-config-data-custom\") pod \"barbican-worker-98dc5ccc5-l88l9\" (UID: \"ef6678e8-7116-4dc1-a7cd-420317d521eb\") " pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.484468 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff1ff7d7-b307-4f43-a76a-09da21f5fd05-config-data-custom\") pod \"barbican-keystone-listener-8647888b98-65v2r\" (UID: \"ff1ff7d7-b307-4f43-a76a-09da21f5fd05\") " pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.484513 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6678e8-7116-4dc1-a7cd-420317d521eb-combined-ca-bundle\") pod \"barbican-worker-98dc5ccc5-l88l9\" (UID: \"ef6678e8-7116-4dc1-a7cd-420317d521eb\") " pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.484537 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1ff7d7-b307-4f43-a76a-09da21f5fd05-config-data\") pod \"barbican-keystone-listener-8647888b98-65v2r\" (UID: \"ff1ff7d7-b307-4f43-a76a-09da21f5fd05\") " pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.484570 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qcwv\" (UniqueName: \"kubernetes.io/projected/ef6678e8-7116-4dc1-a7cd-420317d521eb-kube-api-access-9qcwv\") pod \"barbican-worker-98dc5ccc5-l88l9\" (UID: \"ef6678e8-7116-4dc1-a7cd-420317d521eb\") " pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.484629 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef6678e8-7116-4dc1-a7cd-420317d521eb-logs\") pod \"barbican-worker-98dc5ccc5-l88l9\" (UID: \"ef6678e8-7116-4dc1-a7cd-420317d521eb\") " pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.484676 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ef6678e8-7116-4dc1-a7cd-420317d521eb-config-data\") pod \"barbican-worker-98dc5ccc5-l88l9\" (UID: \"ef6678e8-7116-4dc1-a7cd-420317d521eb\") " pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.484739 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6ggk\" (UniqueName: \"kubernetes.io/projected/ff1ff7d7-b307-4f43-a76a-09da21f5fd05-kube-api-access-r6ggk\") pod \"barbican-keystone-listener-8647888b98-65v2r\" (UID: \"ff1ff7d7-b307-4f43-a76a-09da21f5fd05\") " pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.486718 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.488994 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef6678e8-7116-4dc1-a7cd-420317d521eb-logs\") pod \"barbican-worker-98dc5ccc5-l88l9\" (UID: \"ef6678e8-7116-4dc1-a7cd-420317d521eb\") " pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.497659 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6678e8-7116-4dc1-a7cd-420317d521eb-combined-ca-bundle\") pod \"barbican-worker-98dc5ccc5-l88l9\" (UID: \"ef6678e8-7116-4dc1-a7cd-420317d521eb\") " pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.503204 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6678e8-7116-4dc1-a7cd-420317d521eb-config-data\") pod \"barbican-worker-98dc5ccc5-l88l9\" (UID: \"ef6678e8-7116-4dc1-a7cd-420317d521eb\") " pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.522082 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8647888b98-65v2r"] Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.535950 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-tmvhn"] Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.537431 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.543590 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef6678e8-7116-4dc1-a7cd-420317d521eb-config-data-custom\") pod \"barbican-worker-98dc5ccc5-l88l9\" (UID: \"ef6678e8-7116-4dc1-a7cd-420317d521eb\") " pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.553950 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qcwv\" (UniqueName: \"kubernetes.io/projected/ef6678e8-7116-4dc1-a7cd-420317d521eb-kube-api-access-9qcwv\") pod \"barbican-worker-98dc5ccc5-l88l9\" (UID: \"ef6678e8-7116-4dc1-a7cd-420317d521eb\") " pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.564735 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-tmvhn"] Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.586617 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff1ff7d7-b307-4f43-a76a-09da21f5fd05-logs\") pod \"barbican-keystone-listener-8647888b98-65v2r\" (UID: \"ff1ff7d7-b307-4f43-a76a-09da21f5fd05\") " pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.586692 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1ff7d7-b307-4f43-a76a-09da21f5fd05-combined-ca-bundle\") pod \"barbican-keystone-listener-8647888b98-65v2r\" (UID: \"ff1ff7d7-b307-4f43-a76a-09da21f5fd05\") " pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.586737 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff1ff7d7-b307-4f43-a76a-09da21f5fd05-config-data-custom\") pod \"barbican-keystone-listener-8647888b98-65v2r\" (UID: \"ff1ff7d7-b307-4f43-a76a-09da21f5fd05\") " pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.586781 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1ff7d7-b307-4f43-a76a-09da21f5fd05-config-data\") pod \"barbican-keystone-listener-8647888b98-65v2r\" (UID: \"ff1ff7d7-b307-4f43-a76a-09da21f5fd05\") " pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.586878 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6ggk\" (UniqueName: \"kubernetes.io/projected/ff1ff7d7-b307-4f43-a76a-09da21f5fd05-kube-api-access-r6ggk\") pod \"barbican-keystone-listener-8647888b98-65v2r\" (UID: \"ff1ff7d7-b307-4f43-a76a-09da21f5fd05\") " pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.588376 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff1ff7d7-b307-4f43-a76a-09da21f5fd05-logs\") pod \"barbican-keystone-listener-8647888b98-65v2r\" (UID: \"ff1ff7d7-b307-4f43-a76a-09da21f5fd05\") " pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 
15:10:56.592389 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1ff7d7-b307-4f43-a76a-09da21f5fd05-combined-ca-bundle\") pod \"barbican-keystone-listener-8647888b98-65v2r\" (UID: \"ff1ff7d7-b307-4f43-a76a-09da21f5fd05\") " pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.602051 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1ff7d7-b307-4f43-a76a-09da21f5fd05-config-data\") pod \"barbican-keystone-listener-8647888b98-65v2r\" (UID: \"ff1ff7d7-b307-4f43-a76a-09da21f5fd05\") " pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.614680 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff1ff7d7-b307-4f43-a76a-09da21f5fd05-config-data-custom\") pod \"barbican-keystone-listener-8647888b98-65v2r\" (UID: \"ff1ff7d7-b307-4f43-a76a-09da21f5fd05\") " pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.622627 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6ggk\" (UniqueName: \"kubernetes.io/projected/ff1ff7d7-b307-4f43-a76a-09da21f5fd05-kube-api-access-r6ggk\") pod \"barbican-keystone-listener-8647888b98-65v2r\" (UID: \"ff1ff7d7-b307-4f43-a76a-09da21f5fd05\") " pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.627766 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-98dc5ccc5-l88l9" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.645590 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7b5bb84d98-q657q"] Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.665643 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.680127 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b5bb84d98-q657q"] Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.680720 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.681049 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-8647888b98-65v2r" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.687742 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-config\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.687794 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-dns-svc\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.687846 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.687867 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.687904 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.687919 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcph9\" (UniqueName: \"kubernetes.io/projected/4a30a15c-7b22-4211-ac2a-a765f21cf967-kube-api-access-rcph9\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.793991 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.794048 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.794086 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9614a412-49d1-4a0c-8eef-ef10eb7cee37-logs\") pod \"barbican-api-7b5bb84d98-q657q\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") " pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.794138 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.794162 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcph9\" (UniqueName: \"kubernetes.io/projected/4a30a15c-7b22-4211-ac2a-a765f21cf967-kube-api-access-rcph9\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.794224 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-combined-ca-bundle\") pod \"barbican-api-7b5bb84d98-q657q\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") " pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.794252 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-config-data\") pod \"barbican-api-7b5bb84d98-q657q\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") " pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.794348 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-config\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.794376 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4wwn\" (UniqueName: \"kubernetes.io/projected/9614a412-49d1-4a0c-8eef-ef10eb7cee37-kube-api-access-z4wwn\") pod \"barbican-api-7b5bb84d98-q657q\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") " pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.794443 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-dns-svc\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.794474 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-config-data-custom\") pod \"barbican-api-7b5bb84d98-q657q\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") " pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.795395 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.796059 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.796419 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-config\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.796767 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.797219 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-dns-svc\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.833822 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcph9\" (UniqueName: \"kubernetes.io/projected/4a30a15c-7b22-4211-ac2a-a765f21cf967-kube-api-access-rcph9\") pod \"dnsmasq-dns-85ff748b95-tmvhn\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") " pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.899672 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-combined-ca-bundle\") pod \"barbican-api-7b5bb84d98-q657q\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") " pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.899717 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-config-data\") pod \"barbican-api-7b5bb84d98-q657q\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") " pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.901970 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4wwn\" (UniqueName: \"kubernetes.io/projected/9614a412-49d1-4a0c-8eef-ef10eb7cee37-kube-api-access-z4wwn\") pod \"barbican-api-7b5bb84d98-q657q\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") " pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.902858 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-config-data-custom\") pod \"barbican-api-7b5bb84d98-q657q\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") " pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.903279 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9614a412-49d1-4a0c-8eef-ef10eb7cee37-logs\") pod \"barbican-api-7b5bb84d98-q657q\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") " pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.903555 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9614a412-49d1-4a0c-8eef-ef10eb7cee37-logs\") pod \"barbican-api-7b5bb84d98-q657q\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") " pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.905730 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-config-data\") pod \"barbican-api-7b5bb84d98-q657q\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") " pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.910795 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-config-data-custom\") pod \"barbican-api-7b5bb84d98-q657q\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") " pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.911311 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-combined-ca-bundle\") pod \"barbican-api-7b5bb84d98-q657q\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") " pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.948021 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4wwn\" (UniqueName: \"kubernetes.io/projected/9614a412-49d1-4a0c-8eef-ef10eb7cee37-kube-api-access-z4wwn\") pod \"barbican-api-7b5bb84d98-q657q\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") " pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:56 crc kubenswrapper[4860]: I1014 15:10:56.995465 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.023899 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.394504 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-98dc5ccc5-l88l9"] Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.545078 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8647888b98-65v2r"] Oct 14 15:10:57 crc kubenswrapper[4860]: W1014 15:10:57.548194 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1ff7d7_b307_4f43_a76a_09da21f5fd05.slice/crio-c04fdec114339994ef0c0701809ddb6d464fc2882b6acdd4a780784999e372f1 WatchSource:0}: Error finding container c04fdec114339994ef0c0701809ddb6d464fc2882b6acdd4a780784999e372f1: Status 404 returned error can't find the container with id c04fdec114339994ef0c0701809ddb6d464fc2882b6acdd4a780784999e372f1 Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.653721 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-grpb9" Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.722611 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-db-sync-config-data\") pod \"ca080412-b618-4293-a06d-e0d9a774d36b\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.722659 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-combined-ca-bundle\") pod \"ca080412-b618-4293-a06d-e0d9a774d36b\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.722681 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-scripts\") pod \"ca080412-b618-4293-a06d-e0d9a774d36b\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.722728 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-config-data\") pod \"ca080412-b618-4293-a06d-e0d9a774d36b\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.722791 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfcq2\" (UniqueName: \"kubernetes.io/projected/ca080412-b618-4293-a06d-e0d9a774d36b-kube-api-access-rfcq2\") pod \"ca080412-b618-4293-a06d-e0d9a774d36b\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.722856 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca080412-b618-4293-a06d-e0d9a774d36b-etc-machine-id\") pod \"ca080412-b618-4293-a06d-e0d9a774d36b\" (UID: \"ca080412-b618-4293-a06d-e0d9a774d36b\") " Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.723142 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca080412-b618-4293-a06d-e0d9a774d36b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod 
"ca080412-b618-4293-a06d-e0d9a774d36b" (UID: "ca080412-b618-4293-a06d-e0d9a774d36b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.736351 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca080412-b618-4293-a06d-e0d9a774d36b-kube-api-access-rfcq2" (OuterVolumeSpecName: "kube-api-access-rfcq2") pod "ca080412-b618-4293-a06d-e0d9a774d36b" (UID: "ca080412-b618-4293-a06d-e0d9a774d36b"). InnerVolumeSpecName "kube-api-access-rfcq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.738184 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-scripts" (OuterVolumeSpecName: "scripts") pod "ca080412-b618-4293-a06d-e0d9a774d36b" (UID: "ca080412-b618-4293-a06d-e0d9a774d36b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.738398 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ca080412-b618-4293-a06d-e0d9a774d36b" (UID: "ca080412-b618-4293-a06d-e0d9a774d36b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.749458 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca080412-b618-4293-a06d-e0d9a774d36b" (UID: "ca080412-b618-4293-a06d-e0d9a774d36b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:57 crc kubenswrapper[4860]: W1014 15:10:57.783364 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9614a412_49d1_4a0c_8eef_ef10eb7cee37.slice/crio-ffa3d5fb42a714b32a0d79cbc227120c4f52951bef0809d6f44ba89d5714164d WatchSource:0}: Error finding container ffa3d5fb42a714b32a0d79cbc227120c4f52951bef0809d6f44ba89d5714164d: Status 404 returned error can't find the container with id ffa3d5fb42a714b32a0d79cbc227120c4f52951bef0809d6f44ba89d5714164d Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.787090 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b5bb84d98-q657q"] Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.788906 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-config-data" (OuterVolumeSpecName: "config-data") pod "ca080412-b618-4293-a06d-e0d9a774d36b" (UID: "ca080412-b618-4293-a06d-e0d9a774d36b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.821924 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-tmvhn"] Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.824672 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.824705 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfcq2\" (UniqueName: \"kubernetes.io/projected/ca080412-b618-4293-a06d-e0d9a774d36b-kube-api-access-rfcq2\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.824718 4860 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca080412-b618-4293-a06d-e0d9a774d36b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.824729 4860 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.824739 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.824751 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca080412-b618-4293-a06d-e0d9a774d36b-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:10:57 crc kubenswrapper[4860]: W1014 15:10:57.834285 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a30a15c_7b22_4211_ac2a_a765f21cf967.slice/crio-7c270b24a47cd4d7f766351d94dde17ed1195a8ea3c100d31a7b4b8ff93876c4 WatchSource:0}: Error finding container 7c270b24a47cd4d7f766351d94dde17ed1195a8ea3c100d31a7b4b8ff93876c4: Status 404 returned error can't find the container with id 7c270b24a47cd4d7f766351d94dde17ed1195a8ea3c100d31a7b4b8ff93876c4 Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.986592 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5bb84d98-q657q" event={"ID":"9614a412-49d1-4a0c-8eef-ef10eb7cee37","Type":"ContainerStarted","Data":"ffa3d5fb42a714b32a0d79cbc227120c4f52951bef0809d6f44ba89d5714164d"} Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.991126 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-grpb9" event={"ID":"ca080412-b618-4293-a06d-e0d9a774d36b","Type":"ContainerDied","Data":"589a40560d36b475c5283dc2e77e3f33ae4b569301e96636cd6c2e1c2a9458fc"} Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.991163 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="589a40560d36b475c5283dc2e77e3f33ae4b569301e96636cd6c2e1c2a9458fc" Oct 14 15:10:57 crc kubenswrapper[4860]: I1014 15:10:57.991237 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-grpb9" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.002249 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-98dc5ccc5-l88l9" event={"ID":"ef6678e8-7116-4dc1-a7cd-420317d521eb","Type":"ContainerStarted","Data":"104c47dadc1858c678900954db564e1d0fd1ef0a1bc40d6512a88e2a35373c82"} Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.004831 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec30a67-6982-40fb-9bf5-8134cefa0429","Type":"ContainerStarted","Data":"19d5cb8e102b8d7a88cda6ae06c6ea4755c7c11e9a7ab8885104116f4cc651ad"} Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.018437 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8647888b98-65v2r" event={"ID":"ff1ff7d7-b307-4f43-a76a-09da21f5fd05","Type":"ContainerStarted","Data":"c04fdec114339994ef0c0701809ddb6d464fc2882b6acdd4a780784999e372f1"} Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.035377 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" event={"ID":"4a30a15c-7b22-4211-ac2a-a765f21cf967","Type":"ContainerStarted","Data":"7c270b24a47cd4d7f766351d94dde17ed1195a8ea3c100d31a7b4b8ff93876c4"} Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.294956 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 15:10:58 crc kubenswrapper[4860]: E1014 15:10:58.295311 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca080412-b618-4293-a06d-e0d9a774d36b" containerName="cinder-db-sync" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.295323 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca080412-b618-4293-a06d-e0d9a774d36b" containerName="cinder-db-sync" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.295492 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca080412-b618-4293-a06d-e0d9a774d36b" containerName="cinder-db-sync" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.296379 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.301633 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.303984 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.304184 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hvpvl" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.304964 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.354846 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.366939 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-config-data\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.366980 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.367006 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.367065 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-scripts\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.367155 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.367197 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wln8j\" (UniqueName: \"kubernetes.io/projected/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-kube-api-access-wln8j\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.506733 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-tmvhn"] Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.513949 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-config-data\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.513986 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.514016 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.514118 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-scripts\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.531981 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.532119 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wln8j\" (UniqueName: \"kubernetes.io/projected/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-kube-api-access-wln8j\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.544877 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-config-data\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.548592 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.549147 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.561561 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-scripts\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.598606 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.619824 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wln8j\" (UniqueName: \"kubernetes.io/projected/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-kube-api-access-wln8j\") pod \"cinder-scheduler-0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.620039 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vcwrc"] Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.671611 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.685352 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vcwrc"] Oct 14 15:10:58 crc kubenswrapper[4860]: E1014 15:10:58.742536 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a30a15c_7b22_4211_ac2a_a765f21cf967.slice/crio-305b1882164e0e96106e6f2d255818b728ac94a42f1cdcbd94baa7bbf7ad054b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a30a15c_7b22_4211_ac2a_a765f21cf967.slice/crio-conmon-305b1882164e0e96106e6f2d255818b728ac94a42f1cdcbd94baa7bbf7ad054b.scope\": RecentStats: unable to find data in memory cache]" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.762074 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.762120 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.762154 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-config\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.762184 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwkhs\" (UniqueName: \"kubernetes.io/projected/b0a64287-efcb-40a1-a986-7554e896bf83-kube-api-access-mwkhs\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.762227 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.762280 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.810094 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.811853 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.817265 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.817880 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.823030 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.866001 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.866376 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.866619 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.866743 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.866858 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-config\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.867000 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwkhs\" (UniqueName: \"kubernetes.io/projected/b0a64287-efcb-40a1-a986-7554e896bf83-kube-api-access-mwkhs\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.869752 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-config\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.869805 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.875896 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.876371 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.879749 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.922359 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwkhs\" (UniqueName: \"kubernetes.io/projected/b0a64287-efcb-40a1-a986-7554e896bf83-kube-api-access-mwkhs\") pod \"dnsmasq-dns-5c9776ccc5-vcwrc\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.971341 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-config-data\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.971410 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtqtp\" (UniqueName: \"kubernetes.io/projected/3753347d-967a-4f1d-afd2-b028a356ff60-kube-api-access-vtqtp\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.971500 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/3753347d-967a-4f1d-afd2-b028a356ff60-logs\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.971525 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-scripts\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.971583 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.971602 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3753347d-967a-4f1d-afd2-b028a356ff60-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0" Oct 14 15:10:58 crc kubenswrapper[4860]: I1014 15:10:58.971655 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-config-data-custom\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0" Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.032299 4860 util.go:30] "No sandbox for pod can be found. 
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.032299 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.073517 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.073573 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3753347d-967a-4f1d-afd2-b028a356ff60-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.073616 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-config-data-custom\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.073639 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-config-data\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.073667 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtqtp\" (UniqueName: \"kubernetes.io/projected/3753347d-967a-4f1d-afd2-b028a356ff60-kube-api-access-vtqtp\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.073733 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3753347d-967a-4f1d-afd2-b028a356ff60-logs\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.073757 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-scripts\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.074461 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3753347d-967a-4f1d-afd2-b028a356ff60-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.075207 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3753347d-967a-4f1d-afd2-b028a356ff60-logs\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.077807 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.078693 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-config-data\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.082440 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-scripts\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.097272 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5bb84d98-q657q" event={"ID":"9614a412-49d1-4a0c-8eef-ef10eb7cee37","Type":"ContainerStarted","Data":"c32a404a5bb120548b2b54ea106f642c89c5eb175f2df60ed177c80b5c464560"}
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.097311 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5bb84d98-q657q" event={"ID":"9614a412-49d1-4a0c-8eef-ef10eb7cee37","Type":"ContainerStarted","Data":"de0b07109c5d77d7eb5c07dae77af62f5d4537817245cb4ac2bfe3938e3f33a6"}
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.097327 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b5bb84d98-q657q"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.097351 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b5bb84d98-q657q"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.101962 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec30a67-6982-40fb-9bf5-8134cefa0429","Type":"ContainerStarted","Data":"d0ea7d391711211df307804b005a4388d333cc911c962623fe7860ce86987d91"}
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.102976 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.118298 4860 generic.go:334] "Generic (PLEG): container finished" podID="4a30a15c-7b22-4211-ac2a-a765f21cf967" containerID="305b1882164e0e96106e6f2d255818b728ac94a42f1cdcbd94baa7bbf7ad054b" exitCode=0
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.118556 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" event={"ID":"4a30a15c-7b22-4211-ac2a-a765f21cf967","Type":"ContainerDied","Data":"305b1882164e0e96106e6f2d255818b728ac94a42f1cdcbd94baa7bbf7ad054b"}
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.123622 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtqtp\" (UniqueName: \"kubernetes.io/projected/3753347d-967a-4f1d-afd2-b028a356ff60-kube-api-access-vtqtp\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.125723 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-config-data-custom\") pod \"cinder-api-0\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") " pod="openstack/cinder-api-0"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.195583 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.286822 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7b5bb84d98-q657q" podStartSLOduration=3.286801826 podStartE2EDuration="3.286801826s" podCreationTimestamp="2025-10-14 15:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:10:59.238154912 +0000 UTC m=+1320.824938371" watchObservedRunningTime="2025-10-14 15:10:59.286801826 +0000 UTC m=+1320.873585275"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.581961 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 14 15:10:59 crc kubenswrapper[4860]: E1014 15:10:59.872766 4860 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
Oct 14 15:10:59 crc kubenswrapper[4860]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/4a30a15c-7b22-4211-ac2a-a765f21cf967/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Oct 14 15:10:59 crc kubenswrapper[4860]: > podSandboxID="7c270b24a47cd4d7f766351d94dde17ed1195a8ea3c100d31a7b4b8ff93876c4"
Oct 14 15:10:59 crc kubenswrapper[4860]: E1014 15:10:59.873233 4860 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Oct 14 15:10:59 crc kubenswrapper[4860]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch57ch5c5hcch589hf7h577h659h96h5c8h5b4h55fhbbh667h565h5bchcbh58dh7dh5bch586h56ch574h598h67dh5c8h56dh8bh574h564hbch7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcph9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-85ff748b95-tmvhn_openstack(4a30a15c-7b22-4211-ac2a-a765f21cf967): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/4a30a15c-7b22-4211-ac2a-a765f21cf967/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Oct 14 15:10:59 crc kubenswrapper[4860]: > logger="UnhandledError"
Oct 14 15:10:59 crc kubenswrapper[4860]: E1014 15:10:59.874551 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/4a30a15c-7b22-4211-ac2a-a765f21cf967/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" podUID="4a30a15c-7b22-4211-ac2a-a765f21cf967"
Oct 14 15:10:59 crc kubenswrapper[4860]: I1014 15:10:59.913995 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vcwrc"]
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.190962 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" event={"ID":"b0a64287-efcb-40a1-a986-7554e896bf83","Type":"ContainerStarted","Data":"4970cb2f29a4f6f2bbb0ab0f9df4f62deda3b385ef30bd8b49d4b8552fa7e8dd"}
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.196725 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.201681 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0","Type":"ContainerStarted","Data":"00d5131898821d14b7841b9bc97cea1869c908df765de598ac73a9bc1f1a44d4"}
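The CreateContainerError above is the runtime failing to bind-mount a subPath source that kubelet staged for the dns-svc mount; the target path in the message (`etc/dnsmasq.d/hosts/dns-svc`, no leading slash) appears to be printed relative to the container rootfs. Kubelet stages subPath sources under /var/lib/kubelet/pods/<podUID>/volume-subpaths/<volume>/<container>/<index>, where <index> seems to track the VolumeMount's position in the container spec: dns-svc is VolumeMount index 1 in the spec dumped above, matching .../volume-subpaths/dns-svc/dnsmasq-dns/1 in the error. A small illustration of that correspondence, with all values copied from this log (the path layout is an inference from the error text, not a documented contract):

```python
# Reconstruct the expected subPath staging paths for the dnsmasq-dns
# container of dnsmasq-dns-85ff748b95-tmvhn, as implied by the error above.
POD_UID = "4a30a15c-7b22-4211-ac2a-a765f21cf967"
CONTAINER = "dnsmasq-dns"

# (name, mountPath) for every VolumeMount with a SubPath, in spec order,
# copied from the container spec dumped in the "Unhandled Error" entry.
VOLUME_MOUNTS = [
    ("config", "/etc/dnsmasq.d/config.cfg"),
    ("dns-svc", "/etc/dnsmasq.d/hosts/dns-svc"),
    ("dns-swift-storage-0", "/etc/dnsmasq.d/hosts/dns-swift-storage-0"),
    ("ovsdbserver-nb", "/etc/dnsmasq.d/hosts/ovsdbserver-nb"),
    ("ovsdbserver-sb", "/etc/dnsmasq.d/hosts/ovsdbserver-sb"),
]

for idx, (volume, mount_path) in enumerate(VOLUME_MOUNTS):
    source = (f"/var/lib/kubelet/pods/{POD_UID}"
              f"/volume-subpaths/{volume}/{CONTAINER}/{idx}")
    # idx == 1 reproduces the failing source path from the log.
    print(f"{source} -> {mount_path}")
```

The pod is deleted and replaced shortly afterwards (the SyncLoop DELETE/REMOVE entries below), so the staging directory disappearing under the runtime is consistent with the "No such file or directory" failure.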
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.780033 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-tmvhn"
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.820579 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-dns-swift-storage-0\") pod \"4a30a15c-7b22-4211-ac2a-a765f21cf967\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") "
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.820660 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-dns-svc\") pod \"4a30a15c-7b22-4211-ac2a-a765f21cf967\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") "
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.820748 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcph9\" (UniqueName: \"kubernetes.io/projected/4a30a15c-7b22-4211-ac2a-a765f21cf967-kube-api-access-rcph9\") pod \"4a30a15c-7b22-4211-ac2a-a765f21cf967\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") "
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.820879 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-config\") pod \"4a30a15c-7b22-4211-ac2a-a765f21cf967\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") "
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.820926 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-ovsdbserver-nb\") pod \"4a30a15c-7b22-4211-ac2a-a765f21cf967\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") "
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.821001 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-ovsdbserver-sb\") pod \"4a30a15c-7b22-4211-ac2a-a765f21cf967\" (UID: \"4a30a15c-7b22-4211-ac2a-a765f21cf967\") "
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.852644 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a30a15c-7b22-4211-ac2a-a765f21cf967-kube-api-access-rcph9" (OuterVolumeSpecName: "kube-api-access-rcph9") pod "4a30a15c-7b22-4211-ac2a-a765f21cf967" (UID: "4a30a15c-7b22-4211-ac2a-a765f21cf967"). InnerVolumeSpecName "kube-api-access-rcph9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.885774 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-config" (OuterVolumeSpecName: "config") pod "4a30a15c-7b22-4211-ac2a-a765f21cf967" (UID: "4a30a15c-7b22-4211-ac2a-a765f21cf967"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.912443 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4a30a15c-7b22-4211-ac2a-a765f21cf967" (UID: "4a30a15c-7b22-4211-ac2a-a765f21cf967"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.924040 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4a30a15c-7b22-4211-ac2a-a765f21cf967" (UID: "4a30a15c-7b22-4211-ac2a-a765f21cf967"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.929894 4860 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.930022 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcph9\" (UniqueName: \"kubernetes.io/projected/4a30a15c-7b22-4211-ac2a-a765f21cf967-kube-api-access-rcph9\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.930142 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-config\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.930237 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.947636 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a30a15c-7b22-4211-ac2a-a765f21cf967" (UID: "4a30a15c-7b22-4211-ac2a-a765f21cf967"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:11:00 crc kubenswrapper[4860]: I1014 15:11:00.949229 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4a30a15c-7b22-4211-ac2a-a765f21cf967" (UID: "4a30a15c-7b22-4211-ac2a-a765f21cf967"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:11:01 crc kubenswrapper[4860]: I1014 15:11:01.032517 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:01 crc kubenswrapper[4860]: I1014 15:11:01.032562 4860 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a30a15c-7b22-4211-ac2a-a765f21cf967-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:01 crc kubenswrapper[4860]: I1014 15:11:01.250417 4860 generic.go:334] "Generic (PLEG): container finished" podID="b0a64287-efcb-40a1-a986-7554e896bf83" containerID="a71397b13a9e603000cf02db5c94e223e11537b20dbe95d0b02220bdcec6e23f" exitCode=0
Oct 14 15:11:01 crc kubenswrapper[4860]: I1014 15:11:01.250498 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" event={"ID":"b0a64287-efcb-40a1-a986-7554e896bf83","Type":"ContainerDied","Data":"a71397b13a9e603000cf02db5c94e223e11537b20dbe95d0b02220bdcec6e23f"}
Oct 14 15:11:01 crc kubenswrapper[4860]: I1014 15:11:01.263476 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3753347d-967a-4f1d-afd2-b028a356ff60","Type":"ContainerStarted","Data":"1b70485682cee01565d1f279517f019d793039080e74bd755ce3af12ce759711"}
Oct 14 15:11:01 crc kubenswrapper[4860]: I1014 15:11:01.276750 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-tmvhn" event={"ID":"4a30a15c-7b22-4211-ac2a-a765f21cf967","Type":"ContainerDied","Data":"7c270b24a47cd4d7f766351d94dde17ed1195a8ea3c100d31a7b4b8ff93876c4"}
Oct 14 15:11:01 crc kubenswrapper[4860]: I1014 15:11:01.276807 4860 scope.go:117] "RemoveContainer" containerID="305b1882164e0e96106e6f2d255818b728ac94a42f1cdcbd94baa7bbf7ad054b"
Oct 14 15:11:01 crc kubenswrapper[4860]: I1014 15:11:01.276956 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-tmvhn"
Oct 14 15:11:01 crc kubenswrapper[4860]: I1014 15:11:01.324462 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-tmvhn"]
Oct 14 15:11:01 crc kubenswrapper[4860]: I1014 15:11:01.373998 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-tmvhn"]
Oct 14 15:11:01 crc kubenswrapper[4860]: I1014 15:11:01.670149 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Oct 14 15:11:02 crc kubenswrapper[4860]: I1014 15:11:02.326362 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3753347d-967a-4f1d-afd2-b028a356ff60","Type":"ContainerStarted","Data":"2f07f5e27542f3316566d054379dc958bbe6c97339b6bf19e8cb2419a843fb87"}
Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.111143 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a30a15c-7b22-4211-ac2a-a765f21cf967" path="/var/lib/kubelet/pods/4a30a15c-7b22-4211-ac2a-a765f21cf967/volumes"
Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.393505 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" event={"ID":"b0a64287-efcb-40a1-a986-7554e896bf83","Type":"ContainerStarted","Data":"723a35917212ce2f3c98a48b4513ea817cf3a3243e7e1b4038a688090840d044"}
Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.395169 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc"
Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.409531 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0","Type":"ContainerStarted","Data":"b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48"}
Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.418508 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8647888b98-65v2r" event={"ID":"ff1ff7d7-b307-4f43-a76a-09da21f5fd05","Type":"ContainerStarted","Data":"7c953f617883e3c8b847763fc951c66061526666ea9b36e4fe0d076691ca40fe"}
Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.418552 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8647888b98-65v2r" event={"ID":"ff1ff7d7-b307-4f43-a76a-09da21f5fd05","Type":"ContainerStarted","Data":"6ac6eb90671105898ee3736dba457c5ed197c8846698a38a30d8689c40a9ef98"}
Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.436899 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" podStartSLOduration=5.436874178 podStartE2EDuration="5.436874178s" podCreationTimestamp="2025-10-14 15:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:11:03.428980066 +0000 UTC m=+1325.015763515" watchObservedRunningTime="2025-10-14 15:11:03.436874178 +0000 UTC m=+1325.023657627"
Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.442310 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3753347d-967a-4f1d-afd2-b028a356ff60","Type":"ContainerStarted","Data":"d21e4b78bd3da16aadb56a4eff731c94fc532f748a6ad2a94b4c62f73d697c02"}
pod="openstack/barbican-worker-98dc5ccc5-l88l9" event={"ID":"ef6678e8-7116-4dc1-a7cd-420317d521eb","Type":"ContainerStarted","Data":"cd1cf81330f69c0fc81c325d5951e1fd6a229d98c51acd7d3c49d30d9cdbea21"} Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.447281 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-98dc5ccc5-l88l9" event={"ID":"ef6678e8-7116-4dc1-a7cd-420317d521eb","Type":"ContainerStarted","Data":"109e2c6084966ed99770bc522a54d31b6b22e1a983c82c7b33552d45d6c4243e"} Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.448287 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.448287 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3753347d-967a-4f1d-afd2-b028a356ff60" containerName="cinder-api-log" containerID="cri-o://2f07f5e27542f3316566d054379dc958bbe6c97339b6bf19e8cb2419a843fb87" gracePeriod=30 Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.448316 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3753347d-967a-4f1d-afd2-b028a356ff60" containerName="cinder-api" containerID="cri-o://d21e4b78bd3da16aadb56a4eff731c94fc532f748a6ad2a94b4c62f73d697c02" gracePeriod=30 Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.459130 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec30a67-6982-40fb-9bf5-8134cefa0429","Type":"ContainerStarted","Data":"60a861f242ac69149bd602a03f3e2e0ed4c5c9029eeba6af0cbff02559a9417f"} Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.459991 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.465303 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-8647888b98-65v2r" podStartSLOduration=2.986166325 podStartE2EDuration="7.46528679s" podCreationTimestamp="2025-10-14 15:10:56 +0000 UTC" firstStartedPulling="2025-10-14 15:10:57.552736535 +0000 UTC m=+1319.139519984" lastFinishedPulling="2025-10-14 15:11:02.031857 +0000 UTC m=+1323.618640449" observedRunningTime="2025-10-14 15:11:03.461551799 +0000 UTC m=+1325.048335258" watchObservedRunningTime="2025-10-14 15:11:03.46528679 +0000 UTC m=+1325.052070239" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.577191 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.294155223 podStartE2EDuration="10.577070123s" podCreationTimestamp="2025-10-14 15:10:53 +0000 UTC" firstStartedPulling="2025-10-14 15:10:54.836560625 +0000 UTC m=+1316.423344074" lastFinishedPulling="2025-10-14 15:11:02.119475525 +0000 UTC m=+1323.706258974" observedRunningTime="2025-10-14 15:11:03.507634042 +0000 UTC m=+1325.094417491" watchObservedRunningTime="2025-10-14 15:11:03.577070123 +0000 UTC m=+1325.163853572" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.651342 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-777489d894-44kqm"] Oct 14 15:11:03 crc kubenswrapper[4860]: E1014 15:11:03.651939 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a30a15c-7b22-4211-ac2a-a765f21cf967" containerName="init" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.652055 4860 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4a30a15c-7b22-4211-ac2a-a765f21cf967" containerName="init" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.652315 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a30a15c-7b22-4211-ac2a-a765f21cf967" containerName="init" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.653382 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.663448 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-98dc5ccc5-l88l9" podStartSLOduration=2.99625044 podStartE2EDuration="7.663427686s" podCreationTimestamp="2025-10-14 15:10:56 +0000 UTC" firstStartedPulling="2025-10-14 15:10:57.451215962 +0000 UTC m=+1319.037999411" lastFinishedPulling="2025-10-14 15:11:02.118393208 +0000 UTC m=+1323.705176657" observedRunningTime="2025-10-14 15:11:03.547205975 +0000 UTC m=+1325.133989434" watchObservedRunningTime="2025-10-14 15:11:03.663427686 +0000 UTC m=+1325.250211135" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.672815 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.672796984 podStartE2EDuration="5.672796984s" podCreationTimestamp="2025-10-14 15:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:11:03.60035713 +0000 UTC m=+1325.187140579" watchObservedRunningTime="2025-10-14 15:11:03.672796984 +0000 UTC m=+1325.259580433" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.676252 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.676859 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.676886 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-777489d894-44kqm"] Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.808356 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1b85a60-532b-442f-ab52-86a88e9e2400-config-data-custom\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.808484 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b85a60-532b-442f-ab52-86a88e9e2400-internal-tls-certs\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.808574 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6ps7\" (UniqueName: \"kubernetes.io/projected/c1b85a60-532b-442f-ab52-86a88e9e2400-kube-api-access-d6ps7\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.808668 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b85a60-532b-442f-ab52-86a88e9e2400-public-tls-certs\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.808762 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b85a60-532b-442f-ab52-86a88e9e2400-combined-ca-bundle\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.808846 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1b85a60-532b-442f-ab52-86a88e9e2400-logs\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.808937 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1b85a60-532b-442f-ab52-86a88e9e2400-config-data\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.910259 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b85a60-532b-442f-ab52-86a88e9e2400-combined-ca-bundle\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.910305 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1b85a60-532b-442f-ab52-86a88e9e2400-logs\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.910341 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1b85a60-532b-442f-ab52-86a88e9e2400-config-data\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.910467 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1b85a60-532b-442f-ab52-86a88e9e2400-config-data-custom\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.910483 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b85a60-532b-442f-ab52-86a88e9e2400-internal-tls-certs\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.910506 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-d6ps7\" (UniqueName: \"kubernetes.io/projected/c1b85a60-532b-442f-ab52-86a88e9e2400-kube-api-access-d6ps7\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.910535 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b85a60-532b-442f-ab52-86a88e9e2400-public-tls-certs\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.913752 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1b85a60-532b-442f-ab52-86a88e9e2400-logs\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.924435 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1b85a60-532b-442f-ab52-86a88e9e2400-config-data-custom\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.925062 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1b85a60-532b-442f-ab52-86a88e9e2400-config-data\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.926463 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b85a60-532b-442f-ab52-86a88e9e2400-internal-tls-certs\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.929329 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b85a60-532b-442f-ab52-86a88e9e2400-combined-ca-bundle\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.936568 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b85a60-532b-442f-ab52-86a88e9e2400-public-tls-certs\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.941592 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6ps7\" (UniqueName: \"kubernetes.io/projected/c1b85a60-532b-442f-ab52-86a88e9e2400-kube-api-access-d6ps7\") pod \"barbican-api-777489d894-44kqm\" (UID: \"c1b85a60-532b-442f-ab52-86a88e9e2400\") " pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:03 crc kubenswrapper[4860]: I1014 15:11:03.987257 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:04 crc kubenswrapper[4860]: I1014 15:11:04.507899 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0","Type":"ContainerStarted","Data":"eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642"} Oct 14 15:11:04 crc kubenswrapper[4860]: I1014 15:11:04.513787 4860 generic.go:334] "Generic (PLEG): container finished" podID="3753347d-967a-4f1d-afd2-b028a356ff60" containerID="2f07f5e27542f3316566d054379dc958bbe6c97339b6bf19e8cb2419a843fb87" exitCode=143 Oct 14 15:11:04 crc kubenswrapper[4860]: I1014 15:11:04.514745 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3753347d-967a-4f1d-afd2-b028a356ff60","Type":"ContainerDied","Data":"2f07f5e27542f3316566d054379dc958bbe6c97339b6bf19e8cb2419a843fb87"} Oct 14 15:11:04 crc kubenswrapper[4860]: I1014 15:11:04.671669 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-777489d894-44kqm"] Oct 14 15:11:05 crc kubenswrapper[4860]: I1014 15:11:05.530152 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-777489d894-44kqm" event={"ID":"c1b85a60-532b-442f-ab52-86a88e9e2400","Type":"ContainerStarted","Data":"bab59fb3476fd8946e28fd7265e6052f3ec2824f1566442a602c688890a775ef"} Oct 14 15:11:05 crc kubenswrapper[4860]: I1014 15:11:05.531361 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:05 crc kubenswrapper[4860]: I1014 15:11:05.531454 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:05 crc kubenswrapper[4860]: I1014 15:11:05.531521 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-777489d894-44kqm" event={"ID":"c1b85a60-532b-442f-ab52-86a88e9e2400","Type":"ContainerStarted","Data":"030dc64ff49154a0b0f0db4ae07f25bc06c3b9e774d575ee3307e0ef7dc0e776"} Oct 14 15:11:05 crc kubenswrapper[4860]: I1014 15:11:05.531587 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-777489d894-44kqm" event={"ID":"c1b85a60-532b-442f-ab52-86a88e9e2400","Type":"ContainerStarted","Data":"5e181b1cdace7b2339e0cd9cd2eaed46cfdb50db5a64cf47e235419a8ea67038"} Oct 14 15:11:05 crc kubenswrapper[4860]: I1014 15:11:05.549072 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-777489d894-44kqm" podStartSLOduration=2.5490524089999997 podStartE2EDuration="2.549052409s" podCreationTimestamp="2025-10-14 15:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:11:05.545603624 +0000 UTC m=+1327.132387073" watchObservedRunningTime="2025-10-14 15:11:05.549052409 +0000 UTC m=+1327.135835858" Oct 14 15:11:05 crc kubenswrapper[4860]: I1014 15:11:05.574504 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.138111793 podStartE2EDuration="7.574486608s" podCreationTimestamp="2025-10-14 15:10:58 +0000 UTC" firstStartedPulling="2025-10-14 15:10:59.608552143 +0000 UTC m=+1321.195335592" lastFinishedPulling="2025-10-14 15:11:02.044926958 +0000 UTC m=+1323.631710407" observedRunningTime="2025-10-14 15:11:05.569811994 +0000 UTC m=+1327.156595443" watchObservedRunningTime="2025-10-14 
15:11:05.574486608 +0000 UTC m=+1327.161270057" Oct 14 15:11:07 crc kubenswrapper[4860]: I1014 15:11:07.024656 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b5bb84d98-q657q" Oct 14 15:11:07 crc kubenswrapper[4860]: I1014 15:11:07.912461 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-64cf955b6-w5x5t" Oct 14 15:11:07 crc kubenswrapper[4860]: I1014 15:11:07.947322 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-64cf955b6-w5x5t" Oct 14 15:11:08 crc kubenswrapper[4860]: I1014 15:11:08.819651 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 14 15:11:09 crc kubenswrapper[4860]: I1014 15:11:09.034211 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:11:09 crc kubenswrapper[4860]: I1014 15:11:09.118624 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lnjrn"] Oct 14 15:11:09 crc kubenswrapper[4860]: I1014 15:11:09.119317 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" podUID="0ab412ac-4ad8-4281-a48a-50c957b45ce2" containerName="dnsmasq-dns" containerID="cri-o://e63b0e30b1cbf1822a4f5b87bf9e67c8fb68a7a4e2bb329552e7aa90e7cc6767" gracePeriod=10 Oct 14 15:11:09 crc kubenswrapper[4860]: I1014 15:11:09.526552 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 15:11:09 crc kubenswrapper[4860]: I1014 15:11:09.596634 4860 generic.go:334] "Generic (PLEG): container finished" podID="0ab412ac-4ad8-4281-a48a-50c957b45ce2" containerID="e63b0e30b1cbf1822a4f5b87bf9e67c8fb68a7a4e2bb329552e7aa90e7cc6767" exitCode=0 Oct 14 15:11:09 crc kubenswrapper[4860]: I1014 15:11:09.596675 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" event={"ID":"0ab412ac-4ad8-4281-a48a-50c957b45ce2","Type":"ContainerDied","Data":"e63b0e30b1cbf1822a4f5b87bf9e67c8fb68a7a4e2bb329552e7aa90e7cc6767"} Oct 14 15:11:09 crc kubenswrapper[4860]: I1014 15:11:09.833916 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-79ffbddbb5-96v5k" Oct 14 15:11:09 crc kubenswrapper[4860]: I1014 15:11:09.852932 4860 util.go:48] "No ready sandbox for pod can be found. 
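The pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling); when the pull timestamps are the zero value 0001-01-01, SLO and E2E coincide. A quick check against the cinder-scheduler-0 numbers (this relationship is an observation about these values, not a documented kubelet contract):

```python
# Re-deriving the cinder-scheduler-0 startup figures logged above.
# Timestamps are copied from the log, truncated to microsecond precision.
from datetime import datetime

def ts(s: str) -> datetime:
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f")

created    = datetime.strptime("2025-10-14 15:10:58", "%Y-%m-%d %H:%M:%S")
pull_start = ts("2025-10-14 15:10:59.608552")   # firstStartedPulling
pull_done  = ts("2025-10-14 15:11:02.044926")   # lastFinishedPulling
observed   = ts("2025-10-14 15:11:05.574486")   # watchObservedRunningTime

e2e = (observed - created).total_seconds()            # ~7.574s = podStartE2EDuration
slo = e2e - (pull_done - pull_start).total_seconds()  # ~5.138s = podStartSLOduration
print(f"E2E ~{e2e:.3f}s, SLO ~{slo:.3f}s")
```

The same arithmetic reproduces the barbican and dnsmasq entries, where zero-valued pull timestamps make the two durations identical.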
Oct 14 15:11:09 crc kubenswrapper[4860]: I1014 15:11:09.852932 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-lnjrn"
Oct 14 15:11:09 crc kubenswrapper[4860]: I1014 15:11:09.969704 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-dns-svc\") pod \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") "
Oct 14 15:11:09 crc kubenswrapper[4860]: I1014 15:11:09.970183 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-config\") pod \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") "
Oct 14 15:11:09 crc kubenswrapper[4860]: I1014 15:11:09.970268 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxntk\" (UniqueName: \"kubernetes.io/projected/0ab412ac-4ad8-4281-a48a-50c957b45ce2-kube-api-access-dxntk\") pod \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") "
Oct 14 15:11:09 crc kubenswrapper[4860]: I1014 15:11:09.970325 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-ovsdbserver-sb\") pod \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") "
Oct 14 15:11:09 crc kubenswrapper[4860]: I1014 15:11:09.970397 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-dns-swift-storage-0\") pod \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") "
Oct 14 15:11:09 crc kubenswrapper[4860]: I1014 15:11:09.970434 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-ovsdbserver-nb\") pod \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\" (UID: \"0ab412ac-4ad8-4281-a48a-50c957b45ce2\") "
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:09.987214 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab412ac-4ad8-4281-a48a-50c957b45ce2-kube-api-access-dxntk" (OuterVolumeSpecName: "kube-api-access-dxntk") pod "0ab412ac-4ad8-4281-a48a-50c957b45ce2" (UID: "0ab412ac-4ad8-4281-a48a-50c957b45ce2"). InnerVolumeSpecName "kube-api-access-dxntk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.052521 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b5bb84d98-q657q"
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.064948 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ab412ac-4ad8-4281-a48a-50c957b45ce2" (UID: "0ab412ac-4ad8-4281-a48a-50c957b45ce2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.074860 4860 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.074899 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxntk\" (UniqueName: \"kubernetes.io/projected/0ab412ac-4ad8-4281-a48a-50c957b45ce2-kube-api-access-dxntk\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.102557 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ab412ac-4ad8-4281-a48a-50c957b45ce2" (UID: "0ab412ac-4ad8-4281-a48a-50c957b45ce2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.102791 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-config" (OuterVolumeSpecName: "config") pod "0ab412ac-4ad8-4281-a48a-50c957b45ce2" (UID: "0ab412ac-4ad8-4281-a48a-50c957b45ce2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.152640 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0ab412ac-4ad8-4281-a48a-50c957b45ce2" (UID: "0ab412ac-4ad8-4281-a48a-50c957b45ce2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.177104 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.177304 4860 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.177385 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-config\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.177600 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ab412ac-4ad8-4281-a48a-50c957b45ce2" (UID: "0ab412ac-4ad8-4281-a48a-50c957b45ce2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.279750 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ab412ac-4ad8-4281-a48a-50c957b45ce2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.664387 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" event={"ID":"0ab412ac-4ad8-4281-a48a-50c957b45ce2","Type":"ContainerDied","Data":"0469b911d83e0ad96bf1d0813d483f40a8e1aee58e833aeed1e8b5f5af1f20b0"}
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.664457 4860 scope.go:117] "RemoveContainer" containerID="e63b0e30b1cbf1822a4f5b87bf9e67c8fb68a7a4e2bb329552e7aa90e7cc6767"
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.664504 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-lnjrn"
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.720950 4860 scope.go:117] "RemoveContainer" containerID="3500372be9c1e05fa6d553a22fbe27c475d7d7f4487714629cd8f9f464c8a821"
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.723094 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lnjrn"]
Oct 14 15:11:10 crc kubenswrapper[4860]: I1014 15:11:10.733928 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lnjrn"]
Oct 14 15:11:11 crc kubenswrapper[4860]: I1014 15:11:11.071000 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab412ac-4ad8-4281-a48a-50c957b45ce2" path="/var/lib/kubelet/pods/0ab412ac-4ad8-4281-a48a-50c957b45ce2/volumes"
Oct 14 15:11:13 crc kubenswrapper[4860]: I1014 15:11:13.018241 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Oct 14 15:11:13 crc kubenswrapper[4860]: I1014 15:11:13.837426 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Oct 14 15:11:13 crc kubenswrapper[4860]: I1014 15:11:13.884092 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.431792 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Oct 14 15:11:14 crc kubenswrapper[4860]: E1014 15:11:14.440590 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab412ac-4ad8-4281-a48a-50c957b45ce2" containerName="init"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.440671 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab412ac-4ad8-4281-a48a-50c957b45ce2" containerName="init"
Oct 14 15:11:14 crc kubenswrapper[4860]: E1014 15:11:14.440761 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab412ac-4ad8-4281-a48a-50c957b45ce2" containerName="dnsmasq-dns"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.440823 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab412ac-4ad8-4281-a48a-50c957b45ce2" containerName="dnsmasq-dns"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.441053 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab412ac-4ad8-4281-a48a-50c957b45ce2" containerName="dnsmasq-dns"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.441749 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.451508 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-rfwp7"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.452148 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.452315 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.459183 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.554488 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4jzx\" (UniqueName: \"kubernetes.io/projected/0923e67e-dcfe-48bd-9987-c24810447a3e-kube-api-access-h4jzx\") pod \"openstackclient\" (UID: \"0923e67e-dcfe-48bd-9987-c24810447a3e\") " pod="openstack/openstackclient"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.554552 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0923e67e-dcfe-48bd-9987-c24810447a3e-openstack-config\") pod \"openstackclient\" (UID: \"0923e67e-dcfe-48bd-9987-c24810447a3e\") " pod="openstack/openstackclient"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.554646 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0923e67e-dcfe-48bd-9987-c24810447a3e-openstack-config-secret\") pod \"openstackclient\" (UID: \"0923e67e-dcfe-48bd-9987-c24810447a3e\") " pod="openstack/openstackclient"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.554732 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0923e67e-dcfe-48bd-9987-c24810447a3e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0923e67e-dcfe-48bd-9987-c24810447a3e\") " pod="openstack/openstackclient"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.577123 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-lnjrn" podUID="0ab412ac-4ad8-4281-a48a-50c957b45ce2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: i/o timeout"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.655950 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0923e67e-dcfe-48bd-9987-c24810447a3e-openstack-config\") pod \"openstackclient\" (UID: \"0923e67e-dcfe-48bd-9987-c24810447a3e\") " pod="openstack/openstackclient"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.656093 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0923e67e-dcfe-48bd-9987-c24810447a3e-openstack-config-secret\") pod \"openstackclient\" (UID: \"0923e67e-dcfe-48bd-9987-c24810447a3e\") " pod="openstack/openstackclient"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.656158 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0923e67e-dcfe-48bd-9987-c24810447a3e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0923e67e-dcfe-48bd-9987-c24810447a3e\") " pod="openstack/openstackclient"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.656256 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4jzx\" (UniqueName: \"kubernetes.io/projected/0923e67e-dcfe-48bd-9987-c24810447a3e-kube-api-access-h4jzx\") pod \"openstackclient\" (UID: \"0923e67e-dcfe-48bd-9987-c24810447a3e\") " pod="openstack/openstackclient"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.656846 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0923e67e-dcfe-48bd-9987-c24810447a3e-openstack-config\") pod \"openstackclient\" (UID: \"0923e67e-dcfe-48bd-9987-c24810447a3e\") " pod="openstack/openstackclient"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.664080 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0923e67e-dcfe-48bd-9987-c24810447a3e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0923e67e-dcfe-48bd-9987-c24810447a3e\") " pod="openstack/openstackclient"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.674074 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0923e67e-dcfe-48bd-9987-c24810447a3e-openstack-config-secret\") pod \"openstackclient\" (UID: \"0923e67e-dcfe-48bd-9987-c24810447a3e\") " pod="openstack/openstackclient"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.682515 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4jzx\" (UniqueName: \"kubernetes.io/projected/0923e67e-dcfe-48bd-9987-c24810447a3e-kube-api-access-h4jzx\") pod \"openstackclient\" (UID: \"0923e67e-dcfe-48bd-9987-c24810447a3e\") " pod="openstack/openstackclient"
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.695679 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" containerName="cinder-scheduler" containerID="cri-o://b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48" gracePeriod=30
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.695764 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" containerName="probe" containerID="cri-o://eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642" gracePeriod=30
Oct 14 15:11:14 crc kubenswrapper[4860]: I1014 15:11:14.783105 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 14 15:11:15 crc kubenswrapper[4860]: I1014 15:11:15.091774 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5dbc5f5f64-m4tp9"
Oct 14 15:11:15 crc kubenswrapper[4860]: I1014 15:11:15.646551 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Oct 14 15:11:15 crc kubenswrapper[4860]: I1014 15:11:15.708574 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0923e67e-dcfe-48bd-9987-c24810447a3e","Type":"ContainerStarted","Data":"c054ba41989f688062a009d5e3cfe7d262750363e6fd3de718a3147ebf0b6ce6"}
Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.042735 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-76b8bb94b7-r2cx7"]
Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.044227 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-76b8bb94b7-r2cx7"
Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.054301 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.054507 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.054642 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.088543 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b791e9e4-1b27-429a-9811-2b956a974e3a-combined-ca-bundle\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7"
Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.088578 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b791e9e4-1b27-429a-9811-2b956a974e3a-internal-tls-certs\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7"
Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.088638 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b791e9e4-1b27-429a-9811-2b956a974e3a-run-httpd\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7"
Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.088669 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b791e9e4-1b27-429a-9811-2b956a974e3a-public-tls-certs\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7"
Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.088689 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b791e9e4-1b27-429a-9811-2b956a974e3a-etc-swift\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") "
pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.088703 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b791e9e4-1b27-429a-9811-2b956a974e3a-log-httpd\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.088737 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4vkc\" (UniqueName: \"kubernetes.io/projected/b791e9e4-1b27-429a-9811-2b956a974e3a-kube-api-access-c4vkc\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.088769 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b791e9e4-1b27-429a-9811-2b956a974e3a-config-data\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.147960 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-76b8bb94b7-r2cx7"] Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.190807 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b791e9e4-1b27-429a-9811-2b956a974e3a-etc-swift\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.190844 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b791e9e4-1b27-429a-9811-2b956a974e3a-log-httpd\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.190887 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4vkc\" (UniqueName: \"kubernetes.io/projected/b791e9e4-1b27-429a-9811-2b956a974e3a-kube-api-access-c4vkc\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.190921 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b791e9e4-1b27-429a-9811-2b956a974e3a-config-data\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.190974 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b791e9e4-1b27-429a-9811-2b956a974e3a-combined-ca-bundle\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.190990 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b791e9e4-1b27-429a-9811-2b956a974e3a-internal-tls-certs\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.191119 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b791e9e4-1b27-429a-9811-2b956a974e3a-run-httpd\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.191157 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b791e9e4-1b27-429a-9811-2b956a974e3a-public-tls-certs\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.192157 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b791e9e4-1b27-429a-9811-2b956a974e3a-log-httpd\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.192239 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b791e9e4-1b27-429a-9811-2b956a974e3a-run-httpd\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.227391 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4vkc\" (UniqueName: \"kubernetes.io/projected/b791e9e4-1b27-429a-9811-2b956a974e3a-kube-api-access-c4vkc\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.227976 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b791e9e4-1b27-429a-9811-2b956a974e3a-combined-ca-bundle\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.228340 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b791e9e4-1b27-429a-9811-2b956a974e3a-config-data\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.228350 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b791e9e4-1b27-429a-9811-2b956a974e3a-public-tls-certs\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.228932 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b791e9e4-1b27-429a-9811-2b956a974e3a-internal-tls-certs\") pod \"swift-proxy-76b8bb94b7-r2cx7\" 
(UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.232403 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b791e9e4-1b27-429a-9811-2b956a974e3a-etc-swift\") pod \"swift-proxy-76b8bb94b7-r2cx7\" (UID: \"b791e9e4-1b27-429a-9811-2b956a974e3a\") " pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.388735 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.664925 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.712737 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-config-data-custom\") pod \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.712776 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-etc-machine-id\") pod \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.712947 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-combined-ca-bundle\") pod \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.712978 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wln8j\" (UniqueName: \"kubernetes.io/projected/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-kube-api-access-wln8j\") pod \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.713182 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" (UID: "0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.713884 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-scripts\") pod \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.713956 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-config-data\") pod \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\" (UID: \"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0\") " Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.714408 4860 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.727434 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-scripts" (OuterVolumeSpecName: "scripts") pod "0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" (UID: "0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.737425 4860 generic.go:334] "Generic (PLEG): container finished" podID="0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" containerID="eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642" exitCode=0 Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.737461 4860 generic.go:334] "Generic (PLEG): container finished" podID="0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" containerID="b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48" exitCode=0 Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.737480 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0","Type":"ContainerDied","Data":"eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642"} Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.737506 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0","Type":"ContainerDied","Data":"b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48"} Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.737517 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0","Type":"ContainerDied","Data":"00d5131898821d14b7841b9bc97cea1869c908df765de598ac73a9bc1f1a44d4"} Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.737530 4860 scope.go:117] "RemoveContainer" containerID="eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.737651 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.750563 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" (UID: "0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.750615 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-kube-api-access-wln8j" (OuterVolumeSpecName: "kube-api-access-wln8j") pod "0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" (UID: "0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0"). InnerVolumeSpecName "kube-api-access-wln8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.820498 4860 scope.go:117] "RemoveContainer" containerID="b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.824077 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wln8j\" (UniqueName: \"kubernetes.io/projected/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-kube-api-access-wln8j\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.824225 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.824291 4860 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.862716 4860 scope.go:117] "RemoveContainer" containerID="eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642" Oct 14 15:11:16 crc kubenswrapper[4860]: E1014 15:11:16.867308 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642\": container with ID starting with eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642 not found: ID does not exist" containerID="eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.867354 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642"} err="failed to get container status \"eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642\": rpc error: code = NotFound desc = could not find container \"eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642\": container with ID starting with eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642 not found: ID does not exist" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.867385 4860 scope.go:117] "RemoveContainer" containerID="b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48" Oct 14 15:11:16 crc kubenswrapper[4860]: E1014 15:11:16.867745 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48\": container with ID starting with b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48 not found: ID does not exist" containerID="b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.867786 4860 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48"} err="failed to get container status \"b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48\": rpc error: code = NotFound desc = could not find container \"b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48\": container with ID starting with b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48 not found: ID does not exist" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.867818 4860 scope.go:117] "RemoveContainer" containerID="eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.868485 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642"} err="failed to get container status \"eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642\": rpc error: code = NotFound desc = could not find container \"eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642\": container with ID starting with eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642 not found: ID does not exist" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.868508 4860 scope.go:117] "RemoveContainer" containerID="b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.868739 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48"} err="failed to get container status \"b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48\": rpc error: code = NotFound desc = could not find container \"b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48\": container with ID starting with b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48 not found: ID does not exist" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.881183 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" (UID: "0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.926182 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:16 crc kubenswrapper[4860]: I1014 15:11:16.971139 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-config-data" (OuterVolumeSpecName: "config-data") pod "0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" (UID: "0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.029393 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.136180 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.152950 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.168582 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 15:11:17 crc kubenswrapper[4860]: E1014 15:11:17.169003 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" containerName="cinder-scheduler" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.169020 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" containerName="cinder-scheduler" Oct 14 15:11:17 crc kubenswrapper[4860]: E1014 15:11:17.169073 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" containerName="probe" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.169079 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" containerName="probe" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.169247 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" containerName="cinder-scheduler" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.169279 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" containerName="probe" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.170166 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.174885 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.202473 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.232589 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f4677b-3c11-4662-9129-35805ee9cab0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.232704 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33f4677b-3c11-4662-9129-35805ee9cab0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.232729 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33f4677b-3c11-4662-9129-35805ee9cab0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.232756 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6mmm\" (UniqueName: \"kubernetes.io/projected/33f4677b-3c11-4662-9129-35805ee9cab0-kube-api-access-c6mmm\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.232781 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f4677b-3c11-4662-9129-35805ee9cab0-scripts\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.232799 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f4677b-3c11-4662-9129-35805ee9cab0-config-data\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.334145 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33f4677b-3c11-4662-9129-35805ee9cab0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.334187 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33f4677b-3c11-4662-9129-35805ee9cab0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.334217 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-c6mmm\" (UniqueName: \"kubernetes.io/projected/33f4677b-3c11-4662-9129-35805ee9cab0-kube-api-access-c6mmm\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.334246 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f4677b-3c11-4662-9129-35805ee9cab0-scripts\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.334270 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f4677b-3c11-4662-9129-35805ee9cab0-config-data\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.334313 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f4677b-3c11-4662-9129-35805ee9cab0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.341930 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33f4677b-3c11-4662-9129-35805ee9cab0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.361893 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f4677b-3c11-4662-9129-35805ee9cab0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.363166 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33f4677b-3c11-4662-9129-35805ee9cab0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.366637 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f4677b-3c11-4662-9129-35805ee9cab0-scripts\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.371923 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f4677b-3c11-4662-9129-35805ee9cab0-config-data\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.388049 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6mmm\" (UniqueName: \"kubernetes.io/projected/33f4677b-3c11-4662-9129-35805ee9cab0-kube-api-access-c6mmm\") pod \"cinder-scheduler-0\" (UID: \"33f4677b-3c11-4662-9129-35805ee9cab0\") " pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 
crc kubenswrapper[4860]: I1014 15:11:17.406974 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-76b8bb94b7-r2cx7"] Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.495567 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.537005 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.599297 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-b4bf5b577-882p6" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.657891 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.658224 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="ceilometer-central-agent" containerID="cri-o://83fde1cf132eac68a5b730764f8a2070291cd29748729b6d1e687b561ff235b6" gracePeriod=30 Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.659721 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="proxy-httpd" containerID="cri-o://60a861f242ac69149bd602a03f3e2e0ed4c5c9029eeba6af0cbff02559a9417f" gracePeriod=30 Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.659849 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="sg-core" containerID="cri-o://d0ea7d391711211df307804b005a4388d333cc911c962623fe7860ce86987d91" gracePeriod=30 Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.659892 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="ceilometer-notification-agent" containerID="cri-o://19d5cb8e102b8d7a88cda6ae06c6ea4755c7c11e9a7ab8885104116f4cc651ad" gracePeriod=30 Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.693366 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5dbc5f5f64-m4tp9"] Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.693577 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5dbc5f5f64-m4tp9" podUID="129a5016-7ba9-4901-abe0-9531c4129a99" containerName="neutron-api" containerID="cri-o://2d115478f3ffd0b0fa73706c376b43c60549b269d9445fc330514e5218e7d606" gracePeriod=30 Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.693968 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5dbc5f5f64-m4tp9" podUID="129a5016-7ba9-4901-abe0-9531c4129a99" containerName="neutron-httpd" containerID="cri-o://5cf3fc2a42dca552e15a4ba464b7f019ca6354fb4f64f7b13901b536223a4d5f" gracePeriod=30 Oct 14 15:11:17 crc kubenswrapper[4860]: E1014 15:11:17.694225 4860 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e0bf9aa888a1daf691d775e6458cd08ddb477115a2fd2408b1ee99ee8493989e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e0bf9aa888a1daf691d775e6458cd08ddb477115a2fd2408b1ee99ee8493989e/diff: no such file or directory, extraDiskErr: could not stat 
"/var/log/pods/openstack_dnsmasq-dns-55f844cf75-lnjrn_0ab412ac-4ad8-4281-a48a-50c957b45ce2/dnsmasq-dns/0.log" to get inode usage: stat /var/log/pods/openstack_dnsmasq-dns-55f844cf75-lnjrn_0ab412ac-4ad8-4281-a48a-50c957b45ce2/dnsmasq-dns/0.log: no such file or directory Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.714352 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.160:3000/\": EOF" Oct 14 15:11:17 crc kubenswrapper[4860]: I1014 15:11:17.789375 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76b8bb94b7-r2cx7" event={"ID":"b791e9e4-1b27-429a-9811-2b956a974e3a","Type":"ContainerStarted","Data":"a4fd12965d76506e7a1982682fbfdffc7381c78a7c29f8932004759c92719287"} Oct 14 15:11:18 crc kubenswrapper[4860]: I1014 15:11:18.551893 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-777489d894-44kqm" Oct 14 15:11:18 crc kubenswrapper[4860]: I1014 15:11:18.649251 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 15:11:18 crc kubenswrapper[4860]: I1014 15:11:18.679389 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b5bb84d98-q657q"] Oct 14 15:11:18 crc kubenswrapper[4860]: I1014 15:11:18.679653 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b5bb84d98-q657q" podUID="9614a412-49d1-4a0c-8eef-ef10eb7cee37" containerName="barbican-api-log" containerID="cri-o://c32a404a5bb120548b2b54ea106f642c89c5eb175f2df60ed177c80b5c464560" gracePeriod=30 Oct 14 15:11:18 crc kubenswrapper[4860]: I1014 15:11:18.680196 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b5bb84d98-q657q" podUID="9614a412-49d1-4a0c-8eef-ef10eb7cee37" containerName="barbican-api" containerID="cri-o://de0b07109c5d77d7eb5c07dae77af62f5d4537817245cb4ac2bfe3938e3f33a6" gracePeriod=30 Oct 14 15:11:18 crc kubenswrapper[4860]: I1014 15:11:18.816981 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"33f4677b-3c11-4662-9129-35805ee9cab0","Type":"ContainerStarted","Data":"1b450f1f9e3a0db316da83149025dc6f394672e054fdc7a5ce6629ef5ebf3cae"} Oct 14 15:11:18 crc kubenswrapper[4860]: I1014 15:11:18.831332 4860 generic.go:334] "Generic (PLEG): container finished" podID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerID="d0ea7d391711211df307804b005a4388d333cc911c962623fe7860ce86987d91" exitCode=2 Oct 14 15:11:18 crc kubenswrapper[4860]: I1014 15:11:18.831364 4860 generic.go:334] "Generic (PLEG): container finished" podID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerID="83fde1cf132eac68a5b730764f8a2070291cd29748729b6d1e687b561ff235b6" exitCode=0 Oct 14 15:11:18 crc kubenswrapper[4860]: I1014 15:11:18.831387 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec30a67-6982-40fb-9bf5-8134cefa0429","Type":"ContainerDied","Data":"d0ea7d391711211df307804b005a4388d333cc911c962623fe7860ce86987d91"} Oct 14 15:11:18 crc kubenswrapper[4860]: I1014 15:11:18.831442 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec30a67-6982-40fb-9bf5-8134cefa0429","Type":"ContainerDied","Data":"83fde1cf132eac68a5b730764f8a2070291cd29748729b6d1e687b561ff235b6"} Oct 14 15:11:18 crc 
kubenswrapper[4860]: I1014 15:11:18.880519 4860 generic.go:334] "Generic (PLEG): container finished" podID="129a5016-7ba9-4901-abe0-9531c4129a99" containerID="5cf3fc2a42dca552e15a4ba464b7f019ca6354fb4f64f7b13901b536223a4d5f" exitCode=0 Oct 14 15:11:18 crc kubenswrapper[4860]: I1014 15:11:18.880667 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dbc5f5f64-m4tp9" event={"ID":"129a5016-7ba9-4901-abe0-9531c4129a99","Type":"ContainerDied","Data":"5cf3fc2a42dca552e15a4ba464b7f019ca6354fb4f64f7b13901b536223a4d5f"} Oct 14 15:11:18 crc kubenswrapper[4860]: I1014 15:11:18.895195 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76b8bb94b7-r2cx7" event={"ID":"b791e9e4-1b27-429a-9811-2b956a974e3a","Type":"ContainerStarted","Data":"5e6227d74aa2f7889a81035b9a3cdca1404acb419988f0c22e65cc048ef92ca7"} Oct 14 15:11:19 crc kubenswrapper[4860]: W1014 15:11:19.114582 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a30a15c_7b22_4211_ac2a_a765f21cf967.slice/crio-6dc4cd6b69113a4373fa83ec648952dbb90b8c4cd645f6cb60b2ed07ef2384be.scope WatchSource:0}: Error finding container 6dc4cd6b69113a4373fa83ec648952dbb90b8c4cd645f6cb60b2ed07ef2384be: Status 404 returned error can't find the container with id 6dc4cd6b69113a4373fa83ec648952dbb90b8c4cd645f6cb60b2ed07ef2384be Oct 14 15:11:19 crc kubenswrapper[4860]: I1014 15:11:19.117912 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0" path="/var/lib/kubelet/pods/0b323e2f-aa3b-4b88-a2a1-6f492ce0e5d0/volumes" Oct 14 15:11:19 crc kubenswrapper[4860]: I1014 15:11:19.231159 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="3753347d-967a-4f1d-afd2-b028a356ff60" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.167:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 15:11:19 crc kubenswrapper[4860]: I1014 15:11:19.928336 4860 generic.go:334] "Generic (PLEG): container finished" podID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerID="60a861f242ac69149bd602a03f3e2e0ed4c5c9029eeba6af0cbff02559a9417f" exitCode=0 Oct 14 15:11:19 crc kubenswrapper[4860]: I1014 15:11:19.928670 4860 generic.go:334] "Generic (PLEG): container finished" podID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerID="19d5cb8e102b8d7a88cda6ae06c6ea4755c7c11e9a7ab8885104116f4cc651ad" exitCode=0 Oct 14 15:11:19 crc kubenswrapper[4860]: I1014 15:11:19.928788 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec30a67-6982-40fb-9bf5-8134cefa0429","Type":"ContainerDied","Data":"60a861f242ac69149bd602a03f3e2e0ed4c5c9029eeba6af0cbff02559a9417f"} Oct 14 15:11:19 crc kubenswrapper[4860]: I1014 15:11:19.928820 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec30a67-6982-40fb-9bf5-8134cefa0429","Type":"ContainerDied","Data":"19d5cb8e102b8d7a88cda6ae06c6ea4755c7c11e9a7ab8885104116f4cc651ad"} Oct 14 15:11:19 crc kubenswrapper[4860]: I1014 15:11:19.942866 4860 generic.go:334] "Generic (PLEG): container finished" podID="ba50439f-28b5-4b76-9afb-b705c4037f8d" containerID="77384f8c762ca369199fe7f2734dfaaa8f59ec6ad97c1602f4bac3fd00f71d13" exitCode=137 Oct 14 15:11:19 crc kubenswrapper[4860]: I1014 15:11:19.942936 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8795558b4-cgsrj" 
event={"ID":"ba50439f-28b5-4b76-9afb-b705c4037f8d","Type":"ContainerDied","Data":"77384f8c762ca369199fe7f2734dfaaa8f59ec6ad97c1602f4bac3fd00f71d13"} Oct 14 15:11:19 crc kubenswrapper[4860]: I1014 15:11:19.942963 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8795558b4-cgsrj" event={"ID":"ba50439f-28b5-4b76-9afb-b705c4037f8d","Type":"ContainerStarted","Data":"5f8e0ad5170cd9e86eeeaa392895280deefb8c16bd3b2f58906c143518517c66"} Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.020434 4860 generic.go:334] "Generic (PLEG): container finished" podID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerID="48c829aeecd60e8eb72c1f7f8f0dd773866393ac607409fd129497c22dfd7dfc" exitCode=137 Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.020488 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd7969c76-f8cq5" event={"ID":"e59fdcc0-928b-485d-a66b-450a1d1d76f4","Type":"ContainerDied","Data":"48c829aeecd60e8eb72c1f7f8f0dd773866393ac607409fd129497c22dfd7dfc"} Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.020512 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd7969c76-f8cq5" event={"ID":"e59fdcc0-928b-485d-a66b-450a1d1d76f4","Type":"ContainerStarted","Data":"bd18509ad5611c5c1fa10197f2c020ce51fe5885318f688c39d88d3c9eb96249"} Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.033013 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76b8bb94b7-r2cx7" event={"ID":"b791e9e4-1b27-429a-9811-2b956a974e3a","Type":"ContainerStarted","Data":"33945ab97c679d441b9a1a14b8881ed6447e0b1df45dab5ba0dc36fbb45ff272"} Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.033701 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.033738 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-76b8bb94b7-r2cx7" Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.047164 4860 generic.go:334] "Generic (PLEG): container finished" podID="9614a412-49d1-4a0c-8eef-ef10eb7cee37" containerID="c32a404a5bb120548b2b54ea106f642c89c5eb175f2df60ed177c80b5c464560" exitCode=143 Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.047238 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5bb84d98-q657q" event={"ID":"9614a412-49d1-4a0c-8eef-ef10eb7cee37","Type":"ContainerDied","Data":"c32a404a5bb120548b2b54ea106f642c89c5eb175f2df60ed177c80b5c464560"} Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.134127 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-76b8bb94b7-r2cx7" podStartSLOduration=5.134102846 podStartE2EDuration="5.134102846s" podCreationTimestamp="2025-10-14 15:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:11:20.110000239 +0000 UTC m=+1341.696783708" watchObservedRunningTime="2025-10-14 15:11:20.134102846 +0000 UTC m=+1341.720886295" Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.297261 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.461639 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-combined-ca-bundle\") pod \"7ec30a67-6982-40fb-9bf5-8134cefa0429\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.462008 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec30a67-6982-40fb-9bf5-8134cefa0429-log-httpd\") pod \"7ec30a67-6982-40fb-9bf5-8134cefa0429\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.462069 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-scripts\") pod \"7ec30a67-6982-40fb-9bf5-8134cefa0429\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.462095 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-config-data\") pod \"7ec30a67-6982-40fb-9bf5-8134cefa0429\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.462126 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec30a67-6982-40fb-9bf5-8134cefa0429-run-httpd\") pod \"7ec30a67-6982-40fb-9bf5-8134cefa0429\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.462231 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-sg-core-conf-yaml\") pod \"7ec30a67-6982-40fb-9bf5-8134cefa0429\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.462255 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2dsx\" (UniqueName: \"kubernetes.io/projected/7ec30a67-6982-40fb-9bf5-8134cefa0429-kube-api-access-k2dsx\") pod \"7ec30a67-6982-40fb-9bf5-8134cefa0429\" (UID: \"7ec30a67-6982-40fb-9bf5-8134cefa0429\") " Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.466733 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ec30a67-6982-40fb-9bf5-8134cefa0429-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7ec30a67-6982-40fb-9bf5-8134cefa0429" (UID: "7ec30a67-6982-40fb-9bf5-8134cefa0429"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.467322 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ec30a67-6982-40fb-9bf5-8134cefa0429-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7ec30a67-6982-40fb-9bf5-8134cefa0429" (UID: "7ec30a67-6982-40fb-9bf5-8134cefa0429"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.473514 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ec30a67-6982-40fb-9bf5-8134cefa0429-kube-api-access-k2dsx" (OuterVolumeSpecName: "kube-api-access-k2dsx") pod "7ec30a67-6982-40fb-9bf5-8134cefa0429" (UID: "7ec30a67-6982-40fb-9bf5-8134cefa0429"). InnerVolumeSpecName "kube-api-access-k2dsx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.474463 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-scripts" (OuterVolumeSpecName: "scripts") pod "7ec30a67-6982-40fb-9bf5-8134cefa0429" (UID: "7ec30a67-6982-40fb-9bf5-8134cefa0429"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.516419 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7ec30a67-6982-40fb-9bf5-8134cefa0429" (UID: "7ec30a67-6982-40fb-9bf5-8134cefa0429"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.564239 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.564291 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2dsx\" (UniqueName: \"kubernetes.io/projected/7ec30a67-6982-40fb-9bf5-8134cefa0429-kube-api-access-k2dsx\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.564308 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec30a67-6982-40fb-9bf5-8134cefa0429-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.564319 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.564331 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec30a67-6982-40fb-9bf5-8134cefa0429-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.668227 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ec30a67-6982-40fb-9bf5-8134cefa0429" (UID: "7ec30a67-6982-40fb-9bf5-8134cefa0429"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.753582 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-config-data" (OuterVolumeSpecName: "config-data") pod "7ec30a67-6982-40fb-9bf5-8134cefa0429" (UID: "7ec30a67-6982-40fb-9bf5-8134cefa0429"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.769223 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:20 crc kubenswrapper[4860]: I1014 15:11:20.769295 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec30a67-6982-40fb-9bf5-8134cefa0429-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:21 crc kubenswrapper[4860]: I1014 15:11:21.065453 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 15:11:21 crc kubenswrapper[4860]: I1014 15:11:21.099940 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"33f4677b-3c11-4662-9129-35805ee9cab0","Type":"ContainerStarted","Data":"4607fe44d3be4858768d92d0b9363db6b52c7f5f32c2e22b3c5616d37233eb3d"}
Oct 14 15:11:21 crc kubenswrapper[4860]: I1014 15:11:21.099984 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec30a67-6982-40fb-9bf5-8134cefa0429","Type":"ContainerDied","Data":"441bf870241d41510ca1d82358cc84a8b5fab76bfcb8ffc512ca87c64ef61175"}
Oct 14 15:11:21 crc kubenswrapper[4860]: I1014 15:11:21.100009 4860 scope.go:117] "RemoveContainer" containerID="60a861f242ac69149bd602a03f3e2e0ed4c5c9029eeba6af0cbff02559a9417f"
Oct 14 15:11:21 crc kubenswrapper[4860]: I1014 15:11:21.190219 4860 scope.go:117] "RemoveContainer" containerID="d0ea7d391711211df307804b005a4388d333cc911c962623fe7860ce86987d91"
Oct 14 15:11:21 crc kubenswrapper[4860]: I1014 15:11:21.221882 4860 scope.go:117] "RemoveContainer" containerID="19d5cb8e102b8d7a88cda6ae06c6ea4755c7c11e9a7ab8885104116f4cc651ad"
Oct 14 15:11:21 crc kubenswrapper[4860]: I1014 15:11:21.256260 4860 scope.go:117] "RemoveContainer" containerID="83fde1cf132eac68a5b730764f8a2070291cd29748729b6d1e687b561ff235b6"
Oct 14 15:11:22 crc kubenswrapper[4860]: I1014 15:11:22.078058 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"33f4677b-3c11-4662-9129-35805ee9cab0","Type":"ContainerStarted","Data":"3401086c1084908132c031f681fa4a3c087deb63b1c917185022439cba0a7cf9"}
Oct 14 15:11:22 crc kubenswrapper[4860]: I1014 15:11:22.110225 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.110205192 podStartE2EDuration="5.110205192s" podCreationTimestamp="2025-10-14 15:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:11:22.103005246 +0000 UTC m=+1343.689788695" watchObservedRunningTime="2025-10-14 15:11:22.110205192 +0000 UTC m=+1343.696988641"
Oct 14 15:11:22 crc kubenswrapper[4860]: I1014 15:11:22.242049 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b5bb84d98-q657q" podUID="9614a412-49d1-4a0c-8eef-ef10eb7cee37" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:41232->10.217.0.164:9311: read: connection reset by peer"
Oct 14 15:11:22 crc kubenswrapper[4860]: I1014 15:11:22.242128 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b5bb84d98-q657q" podUID="9614a412-49d1-4a0c-8eef-ef10eb7cee37" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:41224->10.217.0.164:9311: read: connection reset by peer"
Oct 14 15:11:22 crc kubenswrapper[4860]: W1014 15:11:22.303567 4860 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ec30a67_6982_40fb_9bf5_8134cefa0429.slice/crio-60a861f242ac69149bd602a03f3e2e0ed4c5c9029eeba6af0cbff02559a9417f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ec30a67_6982_40fb_9bf5_8134cefa0429.slice/crio-60a861f242ac69149bd602a03f3e2e0ed4c5c9029eeba6af0cbff02559a9417f.scope: no such file or directory
Oct 14 15:11:22 crc kubenswrapper[4860]: E1014 15:11:22.330516 4860 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a30a15c_7b22_4211_ac2a_a765f21cf967.slice/crio-7c270b24a47cd4d7f766351d94dde17ed1195a8ea3c100d31a7b4b8ff93876c4: Error finding container 7c270b24a47cd4d7f766351d94dde17ed1195a8ea3c100d31a7b4b8ff93876c4: Status 404 returned error can't find the container with id 7c270b24a47cd4d7f766351d94dde17ed1195a8ea3c100d31a7b4b8ff93876c4
Oct 14 15:11:22 crc kubenswrapper[4860]: I1014 15:11:22.496188 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Oct 14 15:11:22 crc kubenswrapper[4860]: E1014 15:11:22.768293 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a30a15c_7b22_4211_ac2a_a765f21cf967.slice/crio-conmon-6dc4cd6b69113a4373fa83ec648952dbb90b8c4cd645f6cb60b2ed07ef2384be.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b323e2f_aa3b_4b88_a2a1_6f492ce0e5d0.slice/crio-00d5131898821d14b7841b9bc97cea1869c908df765de598ac73a9bc1f1a44d4\": RecentStats: unable to find data in memory cache]"
Oct 14 15:11:22 crc kubenswrapper[4860]: I1014 15:11:22.882493 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dbc5f5f64-m4tp9"
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.011189 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-config\") pod \"129a5016-7ba9-4901-abe0-9531c4129a99\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") "
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.011334 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-httpd-config\") pod \"129a5016-7ba9-4901-abe0-9531c4129a99\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") "
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.011404 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-ovndb-tls-certs\") pod \"129a5016-7ba9-4901-abe0-9531c4129a99\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") "
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.011439 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-combined-ca-bundle\") pod \"129a5016-7ba9-4901-abe0-9531c4129a99\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") "
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.011457 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgwz8\" (UniqueName: \"kubernetes.io/projected/129a5016-7ba9-4901-abe0-9531c4129a99-kube-api-access-mgwz8\") pod \"129a5016-7ba9-4901-abe0-9531c4129a99\" (UID: \"129a5016-7ba9-4901-abe0-9531c4129a99\") "
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.029258 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "129a5016-7ba9-4901-abe0-9531c4129a99" (UID: "129a5016-7ba9-4901-abe0-9531c4129a99"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.048885 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/129a5016-7ba9-4901-abe0-9531c4129a99-kube-api-access-mgwz8" (OuterVolumeSpecName: "kube-api-access-mgwz8") pod "129a5016-7ba9-4901-abe0-9531c4129a99" (UID: "129a5016-7ba9-4901-abe0-9531c4129a99"). InnerVolumeSpecName "kube-api-access-mgwz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.104496 4860 generic.go:334] "Generic (PLEG): container finished" podID="129a5016-7ba9-4901-abe0-9531c4129a99" containerID="2d115478f3ffd0b0fa73706c376b43c60549b269d9445fc330514e5218e7d606" exitCode=0
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.104849 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dbc5f5f64-m4tp9" event={"ID":"129a5016-7ba9-4901-abe0-9531c4129a99","Type":"ContainerDied","Data":"2d115478f3ffd0b0fa73706c376b43c60549b269d9445fc330514e5218e7d606"}
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.104877 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dbc5f5f64-m4tp9" event={"ID":"129a5016-7ba9-4901-abe0-9531c4129a99","Type":"ContainerDied","Data":"b8612bb4e3267f3affcaf3fdcc366045c787b17a551eba8df99313dae35da300"}
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.104896 4860 scope.go:117] "RemoveContainer" containerID="5cf3fc2a42dca552e15a4ba464b7f019ca6354fb4f64f7b13901b536223a4d5f"
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.104996 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dbc5f5f64-m4tp9"
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.121499 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgwz8\" (UniqueName: \"kubernetes.io/projected/129a5016-7ba9-4901-abe0-9531c4129a99-kube-api-access-mgwz8\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.121537 4860 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-httpd-config\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.125823 4860 generic.go:334] "Generic (PLEG): container finished" podID="9614a412-49d1-4a0c-8eef-ef10eb7cee37" containerID="de0b07109c5d77d7eb5c07dae77af62f5d4537817245cb4ac2bfe3938e3f33a6" exitCode=0
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.126865 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5bb84d98-q657q" event={"ID":"9614a412-49d1-4a0c-8eef-ef10eb7cee37","Type":"ContainerDied","Data":"de0b07109c5d77d7eb5c07dae77af62f5d4537817245cb4ac2bfe3938e3f33a6"}
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.130140 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b5bb84d98-q657q"
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.139230 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "129a5016-7ba9-4901-abe0-9531c4129a99" (UID: "129a5016-7ba9-4901-abe0-9531c4129a99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.142574 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-config" (OuterVolumeSpecName: "config") pod "129a5016-7ba9-4901-abe0-9531c4129a99" (UID: "129a5016-7ba9-4901-abe0-9531c4129a99"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.186588 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "129a5016-7ba9-4901-abe0-9531c4129a99" (UID: "129a5016-7ba9-4901-abe0-9531c4129a99"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.189673 4860 scope.go:117] "RemoveContainer" containerID="2d115478f3ffd0b0fa73706c376b43c60549b269d9445fc330514e5218e7d606"
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.223378 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-combined-ca-bundle\") pod \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") "
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.223489 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4wwn\" (UniqueName: \"kubernetes.io/projected/9614a412-49d1-4a0c-8eef-ef10eb7cee37-kube-api-access-z4wwn\") pod \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") "
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.223533 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-config-data-custom\") pod \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") "
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.223592 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9614a412-49d1-4a0c-8eef-ef10eb7cee37-logs\") pod \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") "
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.223633 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-config-data\") pod \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\" (UID: \"9614a412-49d1-4a0c-8eef-ef10eb7cee37\") "
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.225153 4860 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.225173 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.225182 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/129a5016-7ba9-4901-abe0-9531c4129a99-config\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.227669 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9614a412-49d1-4a0c-8eef-ef10eb7cee37-logs" (OuterVolumeSpecName: "logs") pod "9614a412-49d1-4a0c-8eef-ef10eb7cee37" (UID: "9614a412-49d1-4a0c-8eef-ef10eb7cee37"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.227849 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9614a412-49d1-4a0c-8eef-ef10eb7cee37" (UID: "9614a412-49d1-4a0c-8eef-ef10eb7cee37"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.231751 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9614a412-49d1-4a0c-8eef-ef10eb7cee37-kube-api-access-z4wwn" (OuterVolumeSpecName: "kube-api-access-z4wwn") pod "9614a412-49d1-4a0c-8eef-ef10eb7cee37" (UID: "9614a412-49d1-4a0c-8eef-ef10eb7cee37"). InnerVolumeSpecName "kube-api-access-z4wwn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.244274 4860 scope.go:117] "RemoveContainer" containerID="5cf3fc2a42dca552e15a4ba464b7f019ca6354fb4f64f7b13901b536223a4d5f"
Oct 14 15:11:23 crc kubenswrapper[4860]: E1014 15:11:23.250113 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cf3fc2a42dca552e15a4ba464b7f019ca6354fb4f64f7b13901b536223a4d5f\": container with ID starting with 5cf3fc2a42dca552e15a4ba464b7f019ca6354fb4f64f7b13901b536223a4d5f not found: ID does not exist" containerID="5cf3fc2a42dca552e15a4ba464b7f019ca6354fb4f64f7b13901b536223a4d5f"
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.250249 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cf3fc2a42dca552e15a4ba464b7f019ca6354fb4f64f7b13901b536223a4d5f"} err="failed to get container status \"5cf3fc2a42dca552e15a4ba464b7f019ca6354fb4f64f7b13901b536223a4d5f\": rpc error: code = NotFound desc = could not find container \"5cf3fc2a42dca552e15a4ba464b7f019ca6354fb4f64f7b13901b536223a4d5f\": container with ID starting with 5cf3fc2a42dca552e15a4ba464b7f019ca6354fb4f64f7b13901b536223a4d5f not found: ID does not exist"
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.250369 4860 scope.go:117] "RemoveContainer" containerID="2d115478f3ffd0b0fa73706c376b43c60549b269d9445fc330514e5218e7d606"
Oct 14 15:11:23 crc kubenswrapper[4860]: E1014 15:11:23.254621 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d115478f3ffd0b0fa73706c376b43c60549b269d9445fc330514e5218e7d606\": container with ID starting with 2d115478f3ffd0b0fa73706c376b43c60549b269d9445fc330514e5218e7d606 not found: ID does not exist" containerID="2d115478f3ffd0b0fa73706c376b43c60549b269d9445fc330514e5218e7d606"
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.254833 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d115478f3ffd0b0fa73706c376b43c60549b269d9445fc330514e5218e7d606"} err="failed to get container status \"2d115478f3ffd0b0fa73706c376b43c60549b269d9445fc330514e5218e7d606\": rpc error: code = NotFound desc = could not find container \"2d115478f3ffd0b0fa73706c376b43c60549b269d9445fc330514e5218e7d606\": container with ID starting with 2d115478f3ffd0b0fa73706c376b43c60549b269d9445fc330514e5218e7d606 not found: ID does not exist"
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.267693 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9614a412-49d1-4a0c-8eef-ef10eb7cee37" (UID: "9614a412-49d1-4a0c-8eef-ef10eb7cee37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.289732 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-config-data" (OuterVolumeSpecName: "config-data") pod "9614a412-49d1-4a0c-8eef-ef10eb7cee37" (UID: "9614a412-49d1-4a0c-8eef-ef10eb7cee37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.329646 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9614a412-49d1-4a0c-8eef-ef10eb7cee37-logs\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.329694 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.329710 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.329725 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4wwn\" (UniqueName: \"kubernetes.io/projected/9614a412-49d1-4a0c-8eef-ef10eb7cee37-kube-api-access-z4wwn\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.329761 4860 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9614a412-49d1-4a0c-8eef-ef10eb7cee37-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.447434 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5dbc5f5f64-m4tp9"]
Oct 14 15:11:23 crc kubenswrapper[4860]: I1014 15:11:23.461796 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5dbc5f5f64-m4tp9"]
Oct 14 15:11:24 crc kubenswrapper[4860]: I1014 15:11:24.140156 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5bb84d98-q657q" event={"ID":"9614a412-49d1-4a0c-8eef-ef10eb7cee37","Type":"ContainerDied","Data":"ffa3d5fb42a714b32a0d79cbc227120c4f52951bef0809d6f44ba89d5714164d"}
Oct 14 15:11:24 crc kubenswrapper[4860]: I1014 15:11:24.140212 4860 scope.go:117] "RemoveContainer" containerID="de0b07109c5d77d7eb5c07dae77af62f5d4537817245cb4ac2bfe3938e3f33a6"
Oct 14 15:11:24 crc kubenswrapper[4860]: I1014 15:11:24.140337 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b5bb84d98-q657q"
Oct 14 15:11:24 crc kubenswrapper[4860]: I1014 15:11:24.177344 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b5bb84d98-q657q"]
Oct 14 15:11:24 crc kubenswrapper[4860]: I1014 15:11:24.188621 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7b5bb84d98-q657q"]
Oct 14 15:11:24 crc kubenswrapper[4860]: I1014 15:11:24.202831 4860 scope.go:117] "RemoveContainer" containerID="c32a404a5bb120548b2b54ea106f642c89c5eb175f2df60ed177c80b5c464560"
Oct 14 15:11:25 crc kubenswrapper[4860]: I1014 15:11:25.075402 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="129a5016-7ba9-4901-abe0-9531c4129a99" path="/var/lib/kubelet/pods/129a5016-7ba9-4901-abe0-9531c4129a99/volumes"
Oct 14 15:11:25 crc kubenswrapper[4860]: I1014 15:11:25.076021 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9614a412-49d1-4a0c-8eef-ef10eb7cee37" path="/var/lib/kubelet/pods/9614a412-49d1-4a0c-8eef-ef10eb7cee37/volumes"
Oct 14 15:11:26 crc kubenswrapper[4860]: I1014 15:11:26.400995 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-76b8bb94b7-r2cx7"
Oct 14 15:11:26 crc kubenswrapper[4860]: I1014 15:11:26.404267 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-76b8bb94b7-r2cx7"
Oct 14 15:11:27 crc kubenswrapper[4860]: I1014 15:11:27.786703 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Oct 14 15:11:28 crc kubenswrapper[4860]: I1014 15:11:28.640112 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dd7969c76-f8cq5"
Oct 14 15:11:28 crc kubenswrapper[4860]: I1014 15:11:28.640430 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dd7969c76-f8cq5"
Oct 14 15:11:28 crc kubenswrapper[4860]: I1014 15:11:28.817812 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8795558b4-cgsrj"
Oct 14 15:11:28 crc kubenswrapper[4860]: I1014 15:11:28.818763 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8795558b4-cgsrj"
Oct 14 15:11:33 crc kubenswrapper[4860]: W1014 15:11:33.555150 4860 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3753347d_967a_4f1d_afd2_b028a356ff60.slice/crio-d21e4b78bd3da16aadb56a4eff731c94fc532f748a6ad2a94b4c62f73d697c02.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3753347d_967a_4f1d_afd2_b028a356ff60.slice/crio-d21e4b78bd3da16aadb56a4eff731c94fc532f748a6ad2a94b4c62f73d697c02.scope: no such file or directory
Oct 14 15:11:33 crc kubenswrapper[4860]: W1014 15:11:33.555710 4860 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b323e2f_aa3b_4b88_a2a1_6f492ce0e5d0.slice/crio-conmon-b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b323e2f_aa3b_4b88_a2a1_6f492ce0e5d0.slice/crio-conmon-b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48.scope: no such file or directory
Oct 14 15:11:33 crc kubenswrapper[4860]: W1014 15:11:33.555732 4860 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b323e2f_aa3b_4b88_a2a1_6f492ce0e5d0.slice/crio-b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b323e2f_aa3b_4b88_a2a1_6f492ce0e5d0.slice/crio-b307187e5626b878ed7c5208fe5f374f1c8248d75e0d1535792fe3578b64fc48.scope: no such file or directory
Oct 14 15:11:33 crc kubenswrapper[4860]: W1014 15:11:33.559929 4860 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b323e2f_aa3b_4b88_a2a1_6f492ce0e5d0.slice/crio-conmon-eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b323e2f_aa3b_4b88_a2a1_6f492ce0e5d0.slice/crio-conmon-eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642.scope: no such file or directory
Oct 14 15:11:33 crc kubenswrapper[4860]: W1014 15:11:33.559973 4860 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b323e2f_aa3b_4b88_a2a1_6f492ce0e5d0.slice/crio-eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b323e2f_aa3b_4b88_a2a1_6f492ce0e5d0.slice/crio-eb9f2a94f9f4b2a640a0897706c0caa7c8ccb90b735af70706e0882c45127642.scope: no such file or directory
Oct 14 15:11:34 crc kubenswrapper[4860]: I1014 15:11:34.197340 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="3753347d-967a-4f1d-afd2-b028a356ff60" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.167:8776/healthcheck\": dial tcp 10.217.0.167:8776: connect: connection refused"
Oct 14 15:11:34 crc kubenswrapper[4860]: I1014 15:11:34.304516 4860 generic.go:334] "Generic (PLEG): container finished" podID="3753347d-967a-4f1d-afd2-b028a356ff60" containerID="d21e4b78bd3da16aadb56a4eff731c94fc532f748a6ad2a94b4c62f73d697c02" exitCode=137
Oct 14 15:11:34 crc kubenswrapper[4860]: I1014 15:11:34.304575 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3753347d-967a-4f1d-afd2-b028a356ff60","Type":"ContainerDied","Data":"d21e4b78bd3da16aadb56a4eff731c94fc532f748a6ad2a94b4c62f73d697c02"}
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.316055 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3753347d-967a-4f1d-afd2-b028a356ff60","Type":"ContainerDied","Data":"1b70485682cee01565d1f279517f019d793039080e74bd755ce3af12ce759711"}
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.316435 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b70485682cee01565d1f279517f019d793039080e74bd755ce3af12ce759711"
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.442629 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.574015 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtqtp\" (UniqueName: \"kubernetes.io/projected/3753347d-967a-4f1d-afd2-b028a356ff60-kube-api-access-vtqtp\") pod \"3753347d-967a-4f1d-afd2-b028a356ff60\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") "
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.574148 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3753347d-967a-4f1d-afd2-b028a356ff60-logs\") pod \"3753347d-967a-4f1d-afd2-b028a356ff60\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") "
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.574668 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3753347d-967a-4f1d-afd2-b028a356ff60-logs" (OuterVolumeSpecName: "logs") pod "3753347d-967a-4f1d-afd2-b028a356ff60" (UID: "3753347d-967a-4f1d-afd2-b028a356ff60"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.574755 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-scripts\") pod \"3753347d-967a-4f1d-afd2-b028a356ff60\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") "
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.574824 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-config-data\") pod \"3753347d-967a-4f1d-afd2-b028a356ff60\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") "
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.574881 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3753347d-967a-4f1d-afd2-b028a356ff60-etc-machine-id\") pod \"3753347d-967a-4f1d-afd2-b028a356ff60\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") "
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.574946 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-config-data-custom\") pod \"3753347d-967a-4f1d-afd2-b028a356ff60\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") "
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.575037 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-combined-ca-bundle\") pod \"3753347d-967a-4f1d-afd2-b028a356ff60\" (UID: \"3753347d-967a-4f1d-afd2-b028a356ff60\") "
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.575399 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3753347d-967a-4f1d-afd2-b028a356ff60-logs\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.576224 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3753347d-967a-4f1d-afd2-b028a356ff60-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3753347d-967a-4f1d-afd2-b028a356ff60" (UID: "3753347d-967a-4f1d-afd2-b028a356ff60"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.626557 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-scripts" (OuterVolumeSpecName: "scripts") pod "3753347d-967a-4f1d-afd2-b028a356ff60" (UID: "3753347d-967a-4f1d-afd2-b028a356ff60"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.628422 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3753347d-967a-4f1d-afd2-b028a356ff60-kube-api-access-vtqtp" (OuterVolumeSpecName: "kube-api-access-vtqtp") pod "3753347d-967a-4f1d-afd2-b028a356ff60" (UID: "3753347d-967a-4f1d-afd2-b028a356ff60"). InnerVolumeSpecName "kube-api-access-vtqtp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.629496 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3753347d-967a-4f1d-afd2-b028a356ff60" (UID: "3753347d-967a-4f1d-afd2-b028a356ff60"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.655446 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-config-data" (OuterVolumeSpecName: "config-data") pod "3753347d-967a-4f1d-afd2-b028a356ff60" (UID: "3753347d-967a-4f1d-afd2-b028a356ff60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.684641 4860 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3753347d-967a-4f1d-afd2-b028a356ff60-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.684662 4860 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.684671 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtqtp\" (UniqueName: \"kubernetes.io/projected/3753347d-967a-4f1d-afd2-b028a356ff60-kube-api-access-vtqtp\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.684680 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.684690 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.690822 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3753347d-967a-4f1d-afd2-b028a356ff60" (UID: "3753347d-967a-4f1d-afd2-b028a356ff60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:11:35 crc kubenswrapper[4860]: I1014 15:11:35.786501 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3753347d-967a-4f1d-afd2-b028a356ff60-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.325183 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.328203 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0923e67e-dcfe-48bd-9987-c24810447a3e","Type":"ContainerStarted","Data":"36d2fbd8455a1d5ae98e89c236d532a789b3b532d700d7227a4e03248439a583"}
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.362891 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.7471697600000002 podStartE2EDuration="22.362870385s" podCreationTimestamp="2025-10-14 15:11:14 +0000 UTC" firstStartedPulling="2025-10-14 15:11:15.647063647 +0000 UTC m=+1337.233847096" lastFinishedPulling="2025-10-14 15:11:35.262764272 +0000 UTC m=+1356.849547721" observedRunningTime="2025-10-14 15:11:36.361915111 +0000 UTC m=+1357.948698580" watchObservedRunningTime="2025-10-14 15:11:36.362870385 +0000 UTC m=+1357.949653834"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.387545 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.401505 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.417080 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Oct 14 15:11:36 crc kubenswrapper[4860]: E1014 15:11:36.417519 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3753347d-967a-4f1d-afd2-b028a356ff60" containerName="cinder-api"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.417543 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3753347d-967a-4f1d-afd2-b028a356ff60" containerName="cinder-api"
Oct 14 15:11:36 crc kubenswrapper[4860]: E1014 15:11:36.417563 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="ceilometer-notification-agent"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.417571 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="ceilometer-notification-agent"
Oct 14 15:11:36 crc kubenswrapper[4860]: E1014 15:11:36.417582 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="ceilometer-central-agent"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.417589 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="ceilometer-central-agent"
Oct 14 15:11:36 crc kubenswrapper[4860]: E1014 15:11:36.417611 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="sg-core"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.417621 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="sg-core"
Oct 14 15:11:36 crc kubenswrapper[4860]: E1014 15:11:36.417642 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3753347d-967a-4f1d-afd2-b028a356ff60" containerName="cinder-api-log"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.417649 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3753347d-967a-4f1d-afd2-b028a356ff60" containerName="cinder-api-log"
Oct 14 15:11:36 crc kubenswrapper[4860]: E1014 15:11:36.417658 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9614a412-49d1-4a0c-8eef-ef10eb7cee37" containerName="barbican-api-log"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.417665 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9614a412-49d1-4a0c-8eef-ef10eb7cee37" containerName="barbican-api-log"
Oct 14 15:11:36 crc kubenswrapper[4860]: E1014 15:11:36.417679 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129a5016-7ba9-4901-abe0-9531c4129a99" containerName="neutron-api"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.417687 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="129a5016-7ba9-4901-abe0-9531c4129a99" containerName="neutron-api"
Oct 14 15:11:36 crc kubenswrapper[4860]: E1014 15:11:36.417709 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="proxy-httpd"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.417716 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="proxy-httpd"
Oct 14 15:11:36 crc kubenswrapper[4860]: E1014 15:11:36.417728 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129a5016-7ba9-4901-abe0-9531c4129a99" containerName="neutron-httpd"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.417737 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="129a5016-7ba9-4901-abe0-9531c4129a99" containerName="neutron-httpd"
Oct 14 15:11:36 crc kubenswrapper[4860]: E1014 15:11:36.417753 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9614a412-49d1-4a0c-8eef-ef10eb7cee37" containerName="barbican-api"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.417760 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9614a412-49d1-4a0c-8eef-ef10eb7cee37" containerName="barbican-api"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.417975 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="proxy-httpd"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.417991 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="ceilometer-notification-agent"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.418010 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3753347d-967a-4f1d-afd2-b028a356ff60" containerName="cinder-api-log"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.418026 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="sg-core"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.418038 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3753347d-967a-4f1d-afd2-b028a356ff60" containerName="cinder-api"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.418072 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="129a5016-7ba9-4901-abe0-9531c4129a99" containerName="neutron-api"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.418084 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9614a412-49d1-4a0c-8eef-ef10eb7cee37" containerName="barbican-api"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.418097 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9614a412-49d1-4a0c-8eef-ef10eb7cee37" containerName="barbican-api-log"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.418108 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" containerName="ceilometer-central-agent"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.418122 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="129a5016-7ba9-4901-abe0-9531c4129a99" containerName="neutron-httpd"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.421781 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.429682 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.430068 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.430524 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.453565 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.502214 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.502282 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbh27\" (UniqueName: \"kubernetes.io/projected/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-kube-api-access-cbh27\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.502336 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-config-data-custom\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.502360 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.502464 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-logs\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.502498 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-config-data\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.502524 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-scripts\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.502549 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.502612 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.604383 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-logs\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.604708 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-config-data\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.604779 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-logs\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.604796 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-scripts\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.604873 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.604997 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.605518 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.605579 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbh27\" (UniqueName: \"kubernetes.io/projected/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-kube-api-access-cbh27\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.605197 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.605652 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-config-data-custom\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.605814 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.609887 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.610296 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-config-data-custom\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.610491 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-scripts\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.611927 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.612555 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.627326 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbh27\" (UniqueName: \"kubernetes.io/projected/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-kube-api-access-cbh27\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.647966 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1c38bae-5346-4f5a-ad7c-24f82dd147cf-config-data\") pod \"cinder-api-0\" (UID: \"c1c38bae-5346-4f5a-ad7c-24f82dd147cf\") " pod="openstack/cinder-api-0"
Oct 14 15:11:36 crc kubenswrapper[4860]: I1014 15:11:36.758202 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 14 15:11:37 crc kubenswrapper[4860]: I1014 15:11:37.075543 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3753347d-967a-4f1d-afd2-b028a356ff60" path="/var/lib/kubelet/pods/3753347d-967a-4f1d-afd2-b028a356ff60/volumes"
Oct 14 15:11:37 crc kubenswrapper[4860]: I1014 15:11:37.260636 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Oct 14 15:11:37 crc kubenswrapper[4860]: I1014 15:11:37.340985 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c1c38bae-5346-4f5a-ad7c-24f82dd147cf","Type":"ContainerStarted","Data":"d3cb9b8434366e0d3dd5c82ff04c587201b1365be8a0ee22576156791209e4a2"}
Oct 14 15:11:38 crc kubenswrapper[4860]: I1014 15:11:38.355259 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c1c38bae-5346-4f5a-ad7c-24f82dd147cf","Type":"ContainerStarted","Data":"9d34357c1cb2444e8646879f5a246cf23b5d08903cc6ee6c88413a8fe57237e5"}
Oct 14 15:11:38 crc kubenswrapper[4860]: I1014 15:11:38.642221 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7dd7969c76-f8cq5" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Oct 14 15:11:38 crc kubenswrapper[4860]: I1014 15:11:38.819852 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8795558b4-cgsrj" podUID="ba50439f-28b5-4b76-9afb-b705c4037f8d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
Oct 14 15:11:39 crc kubenswrapper[4860]: I1014 15:11:39.372291 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c1c38bae-5346-4f5a-ad7c-24f82dd147cf","Type":"ContainerStarted","Data":"49656a28c9dedbc9df80457553ba697f9262947f22d8a32b9ec8b33cc2ab6f58"}
Oct 14 15:11:39 crc kubenswrapper[4860]: I1014 15:11:39.372715 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Oct 14 15:11:39 crc kubenswrapper[4860]: I1014 15:11:39.413100 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.413077346 podStartE2EDuration="3.413077346s" podCreationTimestamp="2025-10-14 15:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:11:39.396991848 +0000 UTC m=+1360.983775317" watchObservedRunningTime="2025-10-14 15:11:39.413077346 +0000 UTC m=+1360.999860815"
Oct 14 15:11:48 crc kubenswrapper[4860]: I1014 15:11:48.642823 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7dd7969c76-f8cq5" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Oct 14 15:11:48 crc kubenswrapper[4860]: I1014 15:11:48.821225 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8795558b4-cgsrj" podUID="ba50439f-28b5-4b76-9afb-b705c4037f8d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
Oct 14 15:11:49 crc kubenswrapper[4860]: I1014 15:11:49.460951 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Oct 14 15:11:49 crc kubenswrapper[4860]: I1014 15:11:49.807222 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-ttngb"]
Oct 14 15:11:49 crc kubenswrapper[4860]: I1014 15:11:49.814901 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ttngb"
Oct 14 15:11:49 crc kubenswrapper[4860]: I1014 15:11:49.824109 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ttngb"]
Oct 14 15:11:49 crc kubenswrapper[4860]: I1014 15:11:49.906773 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-rblbz"]
Oct 14 15:11:49 crc kubenswrapper[4860]: I1014 15:11:49.907843 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rblbz"
Oct 14 15:11:49 crc kubenswrapper[4860]: I1014 15:11:49.921232 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rblbz"]
Oct 14 15:11:49 crc kubenswrapper[4860]: I1014 15:11:49.948753 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxg8f\" (UniqueName: \"kubernetes.io/projected/ce4462d4-a6de-4580-bf47-96a2848f3aba-kube-api-access-hxg8f\") pod \"nova-api-db-create-ttngb\" (UID: \"ce4462d4-a6de-4580-bf47-96a2848f3aba\") " pod="openstack/nova-api-db-create-ttngb"
Oct 14 15:11:50 crc kubenswrapper[4860]: I1014 15:11:50.001639 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-kfbxn"]
Oct 14 15:11:50 crc kubenswrapper[4860]: I1014 15:11:50.002924 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kfbxn"
Oct 14 15:11:50 crc kubenswrapper[4860]: I1014 15:11:50.013568 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kfbxn"]
Oct 14 15:11:50 crc kubenswrapper[4860]: I1014 15:11:50.050367 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l66rp\" (UniqueName: \"kubernetes.io/projected/ae5235d3-2655-428f-bad1-a71c041b1254-kube-api-access-l66rp\") pod \"nova-cell0-db-create-rblbz\" (UID: \"ae5235d3-2655-428f-bad1-a71c041b1254\") " pod="openstack/nova-cell0-db-create-rblbz"
Oct 14 15:11:50 crc kubenswrapper[4860]: I1014 15:11:50.050742 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxg8f\" (UniqueName: \"kubernetes.io/projected/ce4462d4-a6de-4580-bf47-96a2848f3aba-kube-api-access-hxg8f\") pod \"nova-api-db-create-ttngb\" (UID: \"ce4462d4-a6de-4580-bf47-96a2848f3aba\") " pod="openstack/nova-api-db-create-ttngb"
Oct 14 15:11:50 crc kubenswrapper[4860]: I1014 15:11:50.072837 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxg8f\" (UniqueName: \"kubernetes.io/projected/ce4462d4-a6de-4580-bf47-96a2848f3aba-kube-api-access-hxg8f\") pod \"nova-api-db-create-ttngb\" (UID: \"ce4462d4-a6de-4580-bf47-96a2848f3aba\") " pod="openstack/nova-api-db-create-ttngb"
Oct 14 15:11:50 crc kubenswrapper[4860]: I1014 15:11:50.132618 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ttngb"
Oct 14 15:11:50 crc kubenswrapper[4860]: I1014 15:11:50.153006 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l66rp\" (UniqueName: \"kubernetes.io/projected/ae5235d3-2655-428f-bad1-a71c041b1254-kube-api-access-l66rp\") pod \"nova-cell0-db-create-rblbz\" (UID: \"ae5235d3-2655-428f-bad1-a71c041b1254\") " pod="openstack/nova-cell0-db-create-rblbz"
Oct 14 15:11:50 crc kubenswrapper[4860]: I1014 15:11:50.153067 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hhgh\" (UniqueName: \"kubernetes.io/projected/43071b38-c290-49b5-ade1-9bd9c623062b-kube-api-access-6hhgh\") pod \"nova-cell1-db-create-kfbxn\" (UID: \"43071b38-c290-49b5-ade1-9bd9c623062b\") " pod="openstack/nova-cell1-db-create-kfbxn"
Oct 14 15:11:50 crc kubenswrapper[4860]: I1014 15:11:50.170638 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l66rp\" (UniqueName: \"kubernetes.io/projected/ae5235d3-2655-428f-bad1-a71c041b1254-kube-api-access-l66rp\") pod \"nova-cell0-db-create-rblbz\" (UID: \"ae5235d3-2655-428f-bad1-a71c041b1254\") " pod="openstack/nova-cell0-db-create-rblbz"
Oct 14 15:11:50 crc kubenswrapper[4860]: I1014 15:11:50.222730 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rblbz"
Oct 14 15:11:50 crc kubenswrapper[4860]: I1014 15:11:50.254483 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hhgh\" (UniqueName: \"kubernetes.io/projected/43071b38-c290-49b5-ade1-9bd9c623062b-kube-api-access-6hhgh\") pod \"nova-cell1-db-create-kfbxn\" (UID: \"43071b38-c290-49b5-ade1-9bd9c623062b\") " pod="openstack/nova-cell1-db-create-kfbxn"
Oct 14 15:11:50 crc kubenswrapper[4860]: I1014 15:11:50.307779 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hhgh\" (UniqueName: \"kubernetes.io/projected/43071b38-c290-49b5-ade1-9bd9c623062b-kube-api-access-6hhgh\") pod \"nova-cell1-db-create-kfbxn\" (UID: \"43071b38-c290-49b5-ade1-9bd9c623062b\") " pod="openstack/nova-cell1-db-create-kfbxn"
Oct 14 15:11:50 crc kubenswrapper[4860]: I1014 15:11:50.327447 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kfbxn"
Oct 14 15:11:50 crc kubenswrapper[4860]: I1014 15:11:50.745840 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ttngb"]
Oct 14 15:11:50 crc kubenswrapper[4860]: I1014 15:11:50.961583 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rblbz"]
Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.091831 4860 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod7ec30a67-6982-40fb-9bf5-8134cefa0429"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod7ec30a67-6982-40fb-9bf5-8134cefa0429] : Timed out while waiting for systemd to remove kubepods-besteffort-pod7ec30a67_6982_40fb_9bf5_8134cefa0429.slice"
Oct 14 15:11:51 crc kubenswrapper[4860]: E1014 15:11:51.092133 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod7ec30a67-6982-40fb-9bf5-8134cefa0429] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod7ec30a67-6982-40fb-9bf5-8134cefa0429] : Timed out while waiting for systemd to remove kubepods-besteffort-pod7ec30a67_6982_40fb_9bf5_8134cefa0429.slice" pod="openstack/ceilometer-0" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429"
Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.232093 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kfbxn"]
Oct 14 15:11:51 crc kubenswrapper[4860]: W1014 15:11:51.250402 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43071b38_c290_49b5_ade1_9bd9c623062b.slice/crio-6ade8fe0c1a53ebd23879c58b0325a04d72b4700352059455f80188c40217f99 WatchSource:0}: Error finding container 6ade8fe0c1a53ebd23879c58b0325a04d72b4700352059455f80188c40217f99: Status 404 returned error can't find the container with id 6ade8fe0c1a53ebd23879c58b0325a04d72b4700352059455f80188c40217f99
Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.487714 4860 generic.go:334] "Generic (PLEG): container finished" podID="ae5235d3-2655-428f-bad1-a71c041b1254" containerID="898235bf8c266bffcb5a89422446f3e7b1654c22c51d41d6e14c4aea790eabfc" exitCode=0
Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.487796 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rblbz" event={"ID":"ae5235d3-2655-428f-bad1-a71c041b1254","Type":"ContainerDied","Data":"898235bf8c266bffcb5a89422446f3e7b1654c22c51d41d6e14c4aea790eabfc"}
Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.487831 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rblbz" event={"ID":"ae5235d3-2655-428f-bad1-a71c041b1254","Type":"ContainerStarted","Data":"556530f2cf8972fa1f003d133616bc587bede6a01b240ceaaf80918300fef4ca"}
Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.490309 4860 generic.go:334] "Generic (PLEG): container finished" podID="ce4462d4-a6de-4580-bf47-96a2848f3aba" containerID="99420f35be2ab1d26d3d621ada87741153172e0eb6dab6e61286e16d74984f57" exitCode=0
Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.490352 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ttngb" event={"ID":"ce4462d4-a6de-4580-bf47-96a2848f3aba","Type":"ContainerDied","Data":"99420f35be2ab1d26d3d621ada87741153172e0eb6dab6e61286e16d74984f57"}
Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.490401 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ttngb" event={"ID":"ce4462d4-a6de-4580-bf47-96a2848f3aba","Type":"ContainerStarted","Data":"4aa8ebd32205f099382e7b3fc6b2a6c1f0e5334b298138a575a64e65a091cdc8"}
Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.491852 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kfbxn" event={"ID":"43071b38-c290-49b5-ade1-9bd9c623062b","Type":"ContainerStarted","Data":"6ade8fe0c1a53ebd23879c58b0325a04d72b4700352059455f80188c40217f99"}
Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.491883 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.592719 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.623246 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.657780 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.670791 4860 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.686195 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.687144 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.688345 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.800810 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.801139 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-config-data\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.801195 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.801212 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-run-httpd\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.801243 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdrmt\" (UniqueName: \"kubernetes.io/projected/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-kube-api-access-rdrmt\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.801279 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-scripts\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.801307 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-log-httpd\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.902798 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-log-httpd\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.902908 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.902940 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-config-data\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.903009 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.903052 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-run-httpd\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.903098 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdrmt\" (UniqueName: \"kubernetes.io/projected/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-kube-api-access-rdrmt\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.903152 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-scripts\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.903510 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-run-httpd\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.904263 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-log-httpd\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.913135 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.913811 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.915800 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-config-data\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.920233 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-scripts\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.923651 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdrmt\" (UniqueName: \"kubernetes.io/projected/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-kube-api-access-rdrmt\") pod \"ceilometer-0\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " pod="openstack/ceilometer-0" Oct 14 15:11:51 crc kubenswrapper[4860]: I1014 15:11:51.995228 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:11:52 crc kubenswrapper[4860]: I1014 15:11:52.457757 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:11:52 crc kubenswrapper[4860]: I1014 15:11:52.471923 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 15:11:52 crc kubenswrapper[4860]: I1014 15:11:52.503275 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6549e8e-cc35-4bdc-87dc-e2f924805bc9","Type":"ContainerStarted","Data":"cac5c9fd7ad1e6ed51afccfe4b6b2657fd2b06e2c091eb998ed5b1c608ebffa7"} Oct 14 15:11:52 crc kubenswrapper[4860]: I1014 15:11:52.505835 4860 generic.go:334] "Generic (PLEG): container finished" podID="43071b38-c290-49b5-ade1-9bd9c623062b" containerID="9a0a214ff333160d39f05c10155786f3ebbf76ef17160ce3067a11df6b4c16be" exitCode=0 Oct 14 15:11:52 crc kubenswrapper[4860]: I1014 15:11:52.505939 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kfbxn" event={"ID":"43071b38-c290-49b5-ade1-9bd9c623062b","Type":"ContainerDied","Data":"9a0a214ff333160d39f05c10155786f3ebbf76ef17160ce3067a11df6b4c16be"} Oct 14 15:11:52 crc kubenswrapper[4860]: I1014 15:11:52.783160 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:11:52 crc kubenswrapper[4860]: I1014 15:11:52.785504 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1deda631-ca4a-40fe-95ce-a2c602baa9e7" containerName="glance-log" containerID="cri-o://fc5c613054f1d23149895761bc026e2c7a60b4d502b8a07203dbd6a2fdee1eb9" gracePeriod=30 Oct 14 15:11:52 crc kubenswrapper[4860]: I1014 15:11:52.786299 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1deda631-ca4a-40fe-95ce-a2c602baa9e7" containerName="glance-httpd" containerID="cri-o://6a979fc0b4003ae897247f28fb3d230754aca3d55d39714c78b7a924ed7a4108" gracePeriod=30 Oct 14 15:11:52 crc kubenswrapper[4860]: I1014 15:11:52.996457 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ttngb" Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.003236 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-rblbz" Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.075435 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ec30a67-6982-40fb-9bf5-8134cefa0429" path="/var/lib/kubelet/pods/7ec30a67-6982-40fb-9bf5-8134cefa0429/volumes" Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.135234 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l66rp\" (UniqueName: \"kubernetes.io/projected/ae5235d3-2655-428f-bad1-a71c041b1254-kube-api-access-l66rp\") pod \"ae5235d3-2655-428f-bad1-a71c041b1254\" (UID: \"ae5235d3-2655-428f-bad1-a71c041b1254\") " Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.135339 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxg8f\" (UniqueName: \"kubernetes.io/projected/ce4462d4-a6de-4580-bf47-96a2848f3aba-kube-api-access-hxg8f\") pod \"ce4462d4-a6de-4580-bf47-96a2848f3aba\" (UID: \"ce4462d4-a6de-4580-bf47-96a2848f3aba\") " Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.139594 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce4462d4-a6de-4580-bf47-96a2848f3aba-kube-api-access-hxg8f" (OuterVolumeSpecName: "kube-api-access-hxg8f") pod "ce4462d4-a6de-4580-bf47-96a2848f3aba" (UID: "ce4462d4-a6de-4580-bf47-96a2848f3aba"). InnerVolumeSpecName "kube-api-access-hxg8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.141182 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae5235d3-2655-428f-bad1-a71c041b1254-kube-api-access-l66rp" (OuterVolumeSpecName: "kube-api-access-l66rp") pod "ae5235d3-2655-428f-bad1-a71c041b1254" (UID: "ae5235d3-2655-428f-bad1-a71c041b1254"). InnerVolumeSpecName "kube-api-access-l66rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.237723 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l66rp\" (UniqueName: \"kubernetes.io/projected/ae5235d3-2655-428f-bad1-a71c041b1254-kube-api-access-l66rp\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.237750 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxg8f\" (UniqueName: \"kubernetes.io/projected/ce4462d4-a6de-4580-bf47-96a2848f3aba-kube-api-access-hxg8f\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.514887 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ttngb" event={"ID":"ce4462d4-a6de-4580-bf47-96a2848f3aba","Type":"ContainerDied","Data":"4aa8ebd32205f099382e7b3fc6b2a6c1f0e5334b298138a575a64e65a091cdc8"} Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.514933 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aa8ebd32205f099382e7b3fc6b2a6c1f0e5334b298138a575a64e65a091cdc8" Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.514999 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ttngb" Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.520440 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6549e8e-cc35-4bdc-87dc-e2f924805bc9","Type":"ContainerStarted","Data":"2ff58b86b125a98e41f5c2beb9a6324438b0b55d5fdfc3977b281c87226565ac"} Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.522293 4860 generic.go:334] "Generic (PLEG): container finished" podID="1deda631-ca4a-40fe-95ce-a2c602baa9e7" containerID="fc5c613054f1d23149895761bc026e2c7a60b4d502b8a07203dbd6a2fdee1eb9" exitCode=143 Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.522371 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1deda631-ca4a-40fe-95ce-a2c602baa9e7","Type":"ContainerDied","Data":"fc5c613054f1d23149895761bc026e2c7a60b4d502b8a07203dbd6a2fdee1eb9"} Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.523688 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rblbz" event={"ID":"ae5235d3-2655-428f-bad1-a71c041b1254","Type":"ContainerDied","Data":"556530f2cf8972fa1f003d133616bc587bede6a01b240ceaaf80918300fef4ca"} Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.523713 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rblbz" Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.523715 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="556530f2cf8972fa1f003d133616bc587bede6a01b240ceaaf80918300fef4ca" Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.587639 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.588244 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eb464cdf-6fb0-4ed3-9c3d-2a505478def4" containerName="glance-log" containerID="cri-o://9dd3219cdd88585ae9052be0ae0473c158eb9f1f6690446dc4b01b27eb4a4f42" gracePeriod=30 Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.588288 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eb464cdf-6fb0-4ed3-9c3d-2a505478def4" containerName="glance-httpd" containerID="cri-o://f231a31bb7b2cc63a356540ef7fce623cb73366c9c880f0c061d1ba16a9300b4" gracePeriod=30 Oct 14 15:11:53 crc kubenswrapper[4860]: I1014 15:11:53.932516 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kfbxn" Oct 14 15:11:54 crc kubenswrapper[4860]: I1014 15:11:54.054478 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hhgh\" (UniqueName: \"kubernetes.io/projected/43071b38-c290-49b5-ade1-9bd9c623062b-kube-api-access-6hhgh\") pod \"43071b38-c290-49b5-ade1-9bd9c623062b\" (UID: \"43071b38-c290-49b5-ade1-9bd9c623062b\") " Oct 14 15:11:54 crc kubenswrapper[4860]: I1014 15:11:54.063062 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43071b38-c290-49b5-ade1-9bd9c623062b-kube-api-access-6hhgh" (OuterVolumeSpecName: "kube-api-access-6hhgh") pod "43071b38-c290-49b5-ade1-9bd9c623062b" (UID: "43071b38-c290-49b5-ade1-9bd9c623062b"). InnerVolumeSpecName "kube-api-access-6hhgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:11:54 crc kubenswrapper[4860]: I1014 15:11:54.156364 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hhgh\" (UniqueName: \"kubernetes.io/projected/43071b38-c290-49b5-ade1-9bd9c623062b-kube-api-access-6hhgh\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:54 crc kubenswrapper[4860]: I1014 15:11:54.533257 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6549e8e-cc35-4bdc-87dc-e2f924805bc9","Type":"ContainerStarted","Data":"7422277650ec60d77215372fcdcdc35e7eef7a65476cd131b861c865dfb81008"} Oct 14 15:11:54 crc kubenswrapper[4860]: I1014 15:11:54.534778 4860 generic.go:334] "Generic (PLEG): container finished" podID="eb464cdf-6fb0-4ed3-9c3d-2a505478def4" containerID="9dd3219cdd88585ae9052be0ae0473c158eb9f1f6690446dc4b01b27eb4a4f42" exitCode=143 Oct 14 15:11:54 crc kubenswrapper[4860]: I1014 15:11:54.534913 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb464cdf-6fb0-4ed3-9c3d-2a505478def4","Type":"ContainerDied","Data":"9dd3219cdd88585ae9052be0ae0473c158eb9f1f6690446dc4b01b27eb4a4f42"} Oct 14 15:11:54 crc kubenswrapper[4860]: I1014 15:11:54.535959 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kfbxn" event={"ID":"43071b38-c290-49b5-ade1-9bd9c623062b","Type":"ContainerDied","Data":"6ade8fe0c1a53ebd23879c58b0325a04d72b4700352059455f80188c40217f99"} Oct 14 15:11:54 crc kubenswrapper[4860]: I1014 15:11:54.536054 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ade8fe0c1a53ebd23879c58b0325a04d72b4700352059455f80188c40217f99" Oct 14 15:11:54 crc kubenswrapper[4860]: I1014 15:11:54.536179 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kfbxn" Oct 14 15:11:54 crc kubenswrapper[4860]: I1014 15:11:54.624235 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:11:55 crc kubenswrapper[4860]: I1014 15:11:55.547064 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6549e8e-cc35-4bdc-87dc-e2f924805bc9","Type":"ContainerStarted","Data":"899eb93b9c34427ebabbe5d20d99d6226028f574bbdbb381f3b9091479ab9498"} Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.478087 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.561051 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6549e8e-cc35-4bdc-87dc-e2f924805bc9","Type":"ContainerStarted","Data":"74df860add8df6f60933def0fc39422a7325f5a602a73fac22bb0b41b02c8b1b"} Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.561117 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="ceilometer-central-agent" containerID="cri-o://2ff58b86b125a98e41f5c2beb9a6324438b0b55d5fdfc3977b281c87226565ac" gracePeriod=30 Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.561162 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.561216 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="proxy-httpd" containerID="cri-o://74df860add8df6f60933def0fc39422a7325f5a602a73fac22bb0b41b02c8b1b" gracePeriod=30 Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.561252 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="sg-core" containerID="cri-o://899eb93b9c34427ebabbe5d20d99d6226028f574bbdbb381f3b9091479ab9498" gracePeriod=30 Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.561282 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="ceilometer-notification-agent" containerID="cri-o://7422277650ec60d77215372fcdcdc35e7eef7a65476cd131b861c865dfb81008" gracePeriod=30 Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.563422 4860 generic.go:334] "Generic (PLEG): container finished" podID="1deda631-ca4a-40fe-95ce-a2c602baa9e7" containerID="6a979fc0b4003ae897247f28fb3d230754aca3d55d39714c78b7a924ed7a4108" exitCode=0 Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.563450 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1deda631-ca4a-40fe-95ce-a2c602baa9e7","Type":"ContainerDied","Data":"6a979fc0b4003ae897247f28fb3d230754aca3d55d39714c78b7a924ed7a4108"} Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.563506 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1deda631-ca4a-40fe-95ce-a2c602baa9e7","Type":"ContainerDied","Data":"308438ecf338dad7e9e9636a1b00e163376b94d56b45a1ee44a494b8f3714889"} Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.563525 4860 scope.go:117] "RemoveContainer" containerID="6a979fc0b4003ae897247f28fb3d230754aca3d55d39714c78b7a924ed7a4108" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.563672 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.598717 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-config-data\") pod \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.598792 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1deda631-ca4a-40fe-95ce-a2c602baa9e7-logs\") pod \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.598879 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-scripts\") pod \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.598948 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-public-tls-certs\") pod \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.598979 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.599021 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-combined-ca-bundle\") pod \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.599067 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfth6\" (UniqueName: \"kubernetes.io/projected/1deda631-ca4a-40fe-95ce-a2c602baa9e7-kube-api-access-zfth6\") pod \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.599155 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1deda631-ca4a-40fe-95ce-a2c602baa9e7-httpd-run\") pod \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\" (UID: \"1deda631-ca4a-40fe-95ce-a2c602baa9e7\") " Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.600246 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1deda631-ca4a-40fe-95ce-a2c602baa9e7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1deda631-ca4a-40fe-95ce-a2c602baa9e7" (UID: "1deda631-ca4a-40fe-95ce-a2c602baa9e7"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.602482 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1deda631-ca4a-40fe-95ce-a2c602baa9e7-logs" (OuterVolumeSpecName: "logs") pod "1deda631-ca4a-40fe-95ce-a2c602baa9e7" (UID: "1deda631-ca4a-40fe-95ce-a2c602baa9e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.605520 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.970868702 podStartE2EDuration="5.605497386s" podCreationTimestamp="2025-10-14 15:11:51 +0000 UTC" firstStartedPulling="2025-10-14 15:11:52.471737755 +0000 UTC m=+1374.058521204" lastFinishedPulling="2025-10-14 15:11:56.106366439 +0000 UTC m=+1377.693149888" observedRunningTime="2025-10-14 15:11:56.596385926 +0000 UTC m=+1378.183169375" watchObservedRunningTime="2025-10-14 15:11:56.605497386 +0000 UTC m=+1378.192280835" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.607689 4860 scope.go:117] "RemoveContainer" containerID="fc5c613054f1d23149895761bc026e2c7a60b4d502b8a07203dbd6a2fdee1eb9" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.610969 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1deda631-ca4a-40fe-95ce-a2c602baa9e7-kube-api-access-zfth6" (OuterVolumeSpecName: "kube-api-access-zfth6") pod "1deda631-ca4a-40fe-95ce-a2c602baa9e7" (UID: "1deda631-ca4a-40fe-95ce-a2c602baa9e7"). InnerVolumeSpecName "kube-api-access-zfth6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.618772 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-scripts" (OuterVolumeSpecName: "scripts") pod "1deda631-ca4a-40fe-95ce-a2c602baa9e7" (UID: "1deda631-ca4a-40fe-95ce-a2c602baa9e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.631883 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "1deda631-ca4a-40fe-95ce-a2c602baa9e7" (UID: "1deda631-ca4a-40fe-95ce-a2c602baa9e7"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.641239 4860 scope.go:117] "RemoveContainer" containerID="6a979fc0b4003ae897247f28fb3d230754aca3d55d39714c78b7a924ed7a4108" Oct 14 15:11:56 crc kubenswrapper[4860]: E1014 15:11:56.644471 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a979fc0b4003ae897247f28fb3d230754aca3d55d39714c78b7a924ed7a4108\": container with ID starting with 6a979fc0b4003ae897247f28fb3d230754aca3d55d39714c78b7a924ed7a4108 not found: ID does not exist" containerID="6a979fc0b4003ae897247f28fb3d230754aca3d55d39714c78b7a924ed7a4108" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.644520 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a979fc0b4003ae897247f28fb3d230754aca3d55d39714c78b7a924ed7a4108"} err="failed to get container status \"6a979fc0b4003ae897247f28fb3d230754aca3d55d39714c78b7a924ed7a4108\": rpc error: code = NotFound desc = could not find container \"6a979fc0b4003ae897247f28fb3d230754aca3d55d39714c78b7a924ed7a4108\": container with ID starting with 6a979fc0b4003ae897247f28fb3d230754aca3d55d39714c78b7a924ed7a4108 not found: ID does not exist" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.644550 4860 scope.go:117] "RemoveContainer" containerID="fc5c613054f1d23149895761bc026e2c7a60b4d502b8a07203dbd6a2fdee1eb9" Oct 14 15:11:56 crc kubenswrapper[4860]: E1014 15:11:56.645082 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc5c613054f1d23149895761bc026e2c7a60b4d502b8a07203dbd6a2fdee1eb9\": container with ID starting with fc5c613054f1d23149895761bc026e2c7a60b4d502b8a07203dbd6a2fdee1eb9 not found: ID does not exist" containerID="fc5c613054f1d23149895761bc026e2c7a60b4d502b8a07203dbd6a2fdee1eb9" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.645132 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc5c613054f1d23149895761bc026e2c7a60b4d502b8a07203dbd6a2fdee1eb9"} err="failed to get container status \"fc5c613054f1d23149895761bc026e2c7a60b4d502b8a07203dbd6a2fdee1eb9\": rpc error: code = NotFound desc = could not find container \"fc5c613054f1d23149895761bc026e2c7a60b4d502b8a07203dbd6a2fdee1eb9\": container with ID starting with fc5c613054f1d23149895761bc026e2c7a60b4d502b8a07203dbd6a2fdee1eb9 not found: ID does not exist" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.681737 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-config-data" (OuterVolumeSpecName: "config-data") pod "1deda631-ca4a-40fe-95ce-a2c602baa9e7" (UID: "1deda631-ca4a-40fe-95ce-a2c602baa9e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.688386 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1deda631-ca4a-40fe-95ce-a2c602baa9e7" (UID: "1deda631-ca4a-40fe-95ce-a2c602baa9e7"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.699124 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1deda631-ca4a-40fe-95ce-a2c602baa9e7" (UID: "1deda631-ca4a-40fe-95ce-a2c602baa9e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.702709 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.702749 4860 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.702786 4860 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.702799 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.702811 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfth6\" (UniqueName: \"kubernetes.io/projected/1deda631-ca4a-40fe-95ce-a2c602baa9e7-kube-api-access-zfth6\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.702823 4860 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1deda631-ca4a-40fe-95ce-a2c602baa9e7-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.702834 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1deda631-ca4a-40fe-95ce-a2c602baa9e7-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.702844 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1deda631-ca4a-40fe-95ce-a2c602baa9e7-logs\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.741691 4860 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.805106 4860 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.905065 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.918863 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.927746 4860 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-external-api-0"] Oct 14 15:11:56 crc kubenswrapper[4860]: E1014 15:11:56.928119 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4462d4-a6de-4580-bf47-96a2848f3aba" containerName="mariadb-database-create" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.928137 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4462d4-a6de-4580-bf47-96a2848f3aba" containerName="mariadb-database-create" Oct 14 15:11:56 crc kubenswrapper[4860]: E1014 15:11:56.928155 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1deda631-ca4a-40fe-95ce-a2c602baa9e7" containerName="glance-log" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.928161 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1deda631-ca4a-40fe-95ce-a2c602baa9e7" containerName="glance-log" Oct 14 15:11:56 crc kubenswrapper[4860]: E1014 15:11:56.928180 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5235d3-2655-428f-bad1-a71c041b1254" containerName="mariadb-database-create" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.928186 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5235d3-2655-428f-bad1-a71c041b1254" containerName="mariadb-database-create" Oct 14 15:11:56 crc kubenswrapper[4860]: E1014 15:11:56.928196 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1deda631-ca4a-40fe-95ce-a2c602baa9e7" containerName="glance-httpd" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.928202 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1deda631-ca4a-40fe-95ce-a2c602baa9e7" containerName="glance-httpd" Oct 14 15:11:56 crc kubenswrapper[4860]: E1014 15:11:56.928212 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43071b38-c290-49b5-ade1-9bd9c623062b" containerName="mariadb-database-create" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.928218 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="43071b38-c290-49b5-ade1-9bd9c623062b" containerName="mariadb-database-create" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.929295 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae5235d3-2655-428f-bad1-a71c041b1254" containerName="mariadb-database-create" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.929318 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1deda631-ca4a-40fe-95ce-a2c602baa9e7" containerName="glance-httpd" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.929334 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce4462d4-a6de-4580-bf47-96a2848f3aba" containerName="mariadb-database-create" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.929348 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1deda631-ca4a-40fe-95ce-a2c602baa9e7" containerName="glance-log" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.929368 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="43071b38-c290-49b5-ade1-9bd9c623062b" containerName="mariadb-database-create" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.940155 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.942798 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.944160 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 14 15:11:56 crc kubenswrapper[4860]: I1014 15:11:56.945918 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.008401 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fns5w\" (UniqueName: \"kubernetes.io/projected/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-kube-api-access-fns5w\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.008764 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.008796 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.008893 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.008940 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-config-data\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.008982 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-scripts\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.009007 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-logs\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.009107 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.105049 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1deda631-ca4a-40fe-95ce-a2c602baa9e7" path="/var/lib/kubelet/pods/1deda631-ca4a-40fe-95ce-a2c602baa9e7/volumes" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.110619 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.110669 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-config-data\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.110691 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-scripts\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.110708 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-logs\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.110750 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.110799 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fns5w\" (UniqueName: \"kubernetes.io/projected/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-kube-api-access-fns5w\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.110814 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.110832 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " 
pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.111588 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.112357 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-logs\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.112831 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.119954 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.124564 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-scripts\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.128327 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.129114 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-config-data\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.144616 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fns5w\" (UniqueName: \"kubernetes.io/projected/16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263-kube-api-access-fns5w\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.168549 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263\") " pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.298949 4860 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.399143 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.563357 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-httpd-run\") pod \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.563419 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.563452 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-internal-tls-certs\") pod \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.563499 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-scripts\") pod \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.563515 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvzbm\" (UniqueName: \"kubernetes.io/projected/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-kube-api-access-bvzbm\") pod \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.563555 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-combined-ca-bundle\") pod \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.563633 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-logs\") pod \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.563654 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-config-data\") pod \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\" (UID: \"eb464cdf-6fb0-4ed3-9c3d-2a505478def4\") " Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.566475 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "eb464cdf-6fb0-4ed3-9c3d-2a505478def4" (UID: "eb464cdf-6fb0-4ed3-9c3d-2a505478def4"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.571688 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-logs" (OuterVolumeSpecName: "logs") pod "eb464cdf-6fb0-4ed3-9c3d-2a505478def4" (UID: "eb464cdf-6fb0-4ed3-9c3d-2a505478def4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.573043 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-scripts" (OuterVolumeSpecName: "scripts") pod "eb464cdf-6fb0-4ed3-9c3d-2a505478def4" (UID: "eb464cdf-6fb0-4ed3-9c3d-2a505478def4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.579078 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-kube-api-access-bvzbm" (OuterVolumeSpecName: "kube-api-access-bvzbm") pod "eb464cdf-6fb0-4ed3-9c3d-2a505478def4" (UID: "eb464cdf-6fb0-4ed3-9c3d-2a505478def4"). InnerVolumeSpecName "kube-api-access-bvzbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.585300 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "eb464cdf-6fb0-4ed3-9c3d-2a505478def4" (UID: "eb464cdf-6fb0-4ed3-9c3d-2a505478def4"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.587333 4860 generic.go:334] "Generic (PLEG): container finished" podID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerID="899eb93b9c34427ebabbe5d20d99d6226028f574bbdbb381f3b9091479ab9498" exitCode=2 Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.587391 4860 generic.go:334] "Generic (PLEG): container finished" podID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerID="7422277650ec60d77215372fcdcdc35e7eef7a65476cd131b861c865dfb81008" exitCode=0 Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.587540 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6549e8e-cc35-4bdc-87dc-e2f924805bc9","Type":"ContainerDied","Data":"899eb93b9c34427ebabbe5d20d99d6226028f574bbdbb381f3b9091479ab9498"} Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.587595 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6549e8e-cc35-4bdc-87dc-e2f924805bc9","Type":"ContainerDied","Data":"7422277650ec60d77215372fcdcdc35e7eef7a65476cd131b861c865dfb81008"} Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.591359 4860 generic.go:334] "Generic (PLEG): container finished" podID="eb464cdf-6fb0-4ed3-9c3d-2a505478def4" containerID="f231a31bb7b2cc63a356540ef7fce623cb73366c9c880f0c061d1ba16a9300b4" exitCode=0 Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.591435 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.591472 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb464cdf-6fb0-4ed3-9c3d-2a505478def4","Type":"ContainerDied","Data":"f231a31bb7b2cc63a356540ef7fce623cb73366c9c880f0c061d1ba16a9300b4"} Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.591505 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb464cdf-6fb0-4ed3-9c3d-2a505478def4","Type":"ContainerDied","Data":"9ab8cc2877269083d833b356fe8a698a48148cca15b85ee1a014da197de31c63"} Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.591526 4860 scope.go:117] "RemoveContainer" containerID="f231a31bb7b2cc63a356540ef7fce623cb73366c9c880f0c061d1ba16a9300b4" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.639823 4860 scope.go:117] "RemoveContainer" containerID="9dd3219cdd88585ae9052be0ae0473c158eb9f1f6690446dc4b01b27eb4a4f42" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.640752 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eb464cdf-6fb0-4ed3-9c3d-2a505478def4" (UID: "eb464cdf-6fb0-4ed3-9c3d-2a505478def4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.647353 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb464cdf-6fb0-4ed3-9c3d-2a505478def4" (UID: "eb464cdf-6fb0-4ed3-9c3d-2a505478def4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.665982 4860 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.666045 4860 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.666058 4860 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.666071 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.666080 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvzbm\" (UniqueName: \"kubernetes.io/projected/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-kube-api-access-bvzbm\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.666090 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.666101 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-logs\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.680952 4860 scope.go:117] "RemoveContainer" containerID="f231a31bb7b2cc63a356540ef7fce623cb73366c9c880f0c061d1ba16a9300b4" Oct 14 15:11:57 crc kubenswrapper[4860]: E1014 15:11:57.681552 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f231a31bb7b2cc63a356540ef7fce623cb73366c9c880f0c061d1ba16a9300b4\": container with ID starting with f231a31bb7b2cc63a356540ef7fce623cb73366c9c880f0c061d1ba16a9300b4 not found: ID does not exist" containerID="f231a31bb7b2cc63a356540ef7fce623cb73366c9c880f0c061d1ba16a9300b4" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.681604 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f231a31bb7b2cc63a356540ef7fce623cb73366c9c880f0c061d1ba16a9300b4"} err="failed to get container status \"f231a31bb7b2cc63a356540ef7fce623cb73366c9c880f0c061d1ba16a9300b4\": rpc error: code = NotFound desc = could not find container \"f231a31bb7b2cc63a356540ef7fce623cb73366c9c880f0c061d1ba16a9300b4\": container with ID starting with f231a31bb7b2cc63a356540ef7fce623cb73366c9c880f0c061d1ba16a9300b4 not found: ID does not exist" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.681630 4860 scope.go:117] "RemoveContainer" containerID="9dd3219cdd88585ae9052be0ae0473c158eb9f1f6690446dc4b01b27eb4a4f42" Oct 14 15:11:57 crc kubenswrapper[4860]: E1014 15:11:57.681993 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9dd3219cdd88585ae9052be0ae0473c158eb9f1f6690446dc4b01b27eb4a4f42\": container with ID starting with 9dd3219cdd88585ae9052be0ae0473c158eb9f1f6690446dc4b01b27eb4a4f42 not found: ID does not exist" containerID="9dd3219cdd88585ae9052be0ae0473c158eb9f1f6690446dc4b01b27eb4a4f42" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.682038 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dd3219cdd88585ae9052be0ae0473c158eb9f1f6690446dc4b01b27eb4a4f42"} err="failed to get container status \"9dd3219cdd88585ae9052be0ae0473c158eb9f1f6690446dc4b01b27eb4a4f42\": rpc error: code = NotFound desc = could not find container \"9dd3219cdd88585ae9052be0ae0473c158eb9f1f6690446dc4b01b27eb4a4f42\": container with ID starting with 9dd3219cdd88585ae9052be0ae0473c158eb9f1f6690446dc4b01b27eb4a4f42 not found: ID does not exist" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.696793 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-config-data" (OuterVolumeSpecName: "config-data") pod "eb464cdf-6fb0-4ed3-9c3d-2a505478def4" (UID: "eb464cdf-6fb0-4ed3-9c3d-2a505478def4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.700183 4860 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.767400 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb464cdf-6fb0-4ed3-9c3d-2a505478def4-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.767457 4860 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.923435 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.932810 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.958523 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:11:57 crc kubenswrapper[4860]: E1014 15:11:57.968896 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb464cdf-6fb0-4ed3-9c3d-2a505478def4" containerName="glance-log" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.968933 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb464cdf-6fb0-4ed3-9c3d-2a505478def4" containerName="glance-log" Oct 14 15:11:57 crc kubenswrapper[4860]: E1014 15:11:57.968948 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb464cdf-6fb0-4ed3-9c3d-2a505478def4" containerName="glance-httpd" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.968956 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb464cdf-6fb0-4ed3-9c3d-2a505478def4" containerName="glance-httpd" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.969255 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb464cdf-6fb0-4ed3-9c3d-2a505478def4" containerName="glance-log" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 
15:11:57.969281 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb464cdf-6fb0-4ed3-9c3d-2a505478def4" containerName="glance-httpd" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.970274 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.973517 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.973731 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 15:11:57 crc kubenswrapper[4860]: I1014 15:11:57.990089 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 15:11:57 crc kubenswrapper[4860]: W1014 15:11:57.991484 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16fc54e6_69a7_4cd1_8cf0_e7a7c7b22263.slice/crio-016a4fe39ce448dc945029571eb579022be628713cfd0965fbe73f9aa71f9f5a WatchSource:0}: Error finding container 016a4fe39ce448dc945029571eb579022be628713cfd0965fbe73f9aa71f9f5a: Status 404 returned error can't find the container with id 016a4fe39ce448dc945029571eb579022be628713cfd0965fbe73f9aa71f9f5a Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.032594 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.073102 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.073196 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.073225 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.073272 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.073296 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " 
pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.073321 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr4rw\" (UniqueName: \"kubernetes.io/projected/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-kube-api-access-wr4rw\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.073385 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.073413 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.176889 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.176948 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.176998 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.177093 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.177126 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.177187 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc 
kubenswrapper[4860]: I1014 15:11:58.177205 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.177227 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr4rw\" (UniqueName: \"kubernetes.io/projected/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-kube-api-access-wr4rw\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.179552 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.180232 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.186525 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.199252 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.200146 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.210469 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr4rw\" (UniqueName: \"kubernetes.io/projected/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-kube-api-access-wr4rw\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.211006 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.211464 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eea5159-5fa7-4ef7-a7c3-4f98d05085e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.243795 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3\") " pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.298439 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.640925 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263","Type":"ContainerStarted","Data":"016a4fe39ce448dc945029571eb579022be628713cfd0965fbe73f9aa71f9f5a"} Oct 14 15:11:58 crc kubenswrapper[4860]: I1014 15:11:58.873376 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 15:11:58 crc kubenswrapper[4860]: W1014 15:11:58.887641 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eea5159_5fa7_4ef7_a7c3_4f98d05085e3.slice/crio-b7d27e57156956d1051a1c49f7c2abe95d23c91f54fd264a2484af99643f403f WatchSource:0}: Error finding container b7d27e57156956d1051a1c49f7c2abe95d23c91f54fd264a2484af99643f403f: Status 404 returned error can't find the container with id b7d27e57156956d1051a1c49f7c2abe95d23c91f54fd264a2484af99643f403f Oct 14 15:11:59 crc kubenswrapper[4860]: I1014 15:11:59.080422 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb464cdf-6fb0-4ed3-9c3d-2a505478def4" path="/var/lib/kubelet/pods/eb464cdf-6fb0-4ed3-9c3d-2a505478def4/volumes" Oct 14 15:11:59 crc kubenswrapper[4860]: I1014 15:11:59.684439 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3","Type":"ContainerStarted","Data":"b7d27e57156956d1051a1c49f7c2abe95d23c91f54fd264a2484af99643f403f"} Oct 14 15:11:59 crc kubenswrapper[4860]: I1014 15:11:59.707355 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263","Type":"ContainerStarted","Data":"c6445a99732d48ee884193b0926dcf8a61b612dddc29908e8e36a0fb5cc46750"} Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.142216 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-dedf-account-create-q9kv8"] Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.145039 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-dedf-account-create-q9kv8" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.160382 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-dedf-account-create-q9kv8"] Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.169295 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.260899 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njfz4\" (UniqueName: \"kubernetes.io/projected/5b949f2b-c0a2-4371-b9ee-8eea850586b1-kube-api-access-njfz4\") pod \"nova-api-dedf-account-create-q9kv8\" (UID: \"5b949f2b-c0a2-4371-b9ee-8eea850586b1\") " pod="openstack/nova-api-dedf-account-create-q9kv8" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.348865 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-d11f-account-create-9mhcb"] Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.351307 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d11f-account-create-9mhcb" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.372233 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.373260 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njfz4\" (UniqueName: \"kubernetes.io/projected/5b949f2b-c0a2-4371-b9ee-8eea850586b1-kube-api-access-njfz4\") pod \"nova-api-dedf-account-create-q9kv8\" (UID: \"5b949f2b-c0a2-4371-b9ee-8eea850586b1\") " pod="openstack/nova-api-dedf-account-create-q9kv8" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.391410 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d11f-account-create-9mhcb"] Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.399708 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njfz4\" (UniqueName: \"kubernetes.io/projected/5b949f2b-c0a2-4371-b9ee-8eea850586b1-kube-api-access-njfz4\") pod \"nova-api-dedf-account-create-q9kv8\" (UID: \"5b949f2b-c0a2-4371-b9ee-8eea850586b1\") " pod="openstack/nova-api-dedf-account-create-q9kv8" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.475320 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxzcd\" (UniqueName: \"kubernetes.io/projected/ea90be69-850d-4707-8931-d91bed695f91-kube-api-access-xxzcd\") pod \"nova-cell0-d11f-account-create-9mhcb\" (UID: \"ea90be69-850d-4707-8931-d91bed695f91\") " pod="openstack/nova-cell0-d11f-account-create-9mhcb" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.547176 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-fa20-account-create-k4vdx"] Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.548278 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fa20-account-create-k4vdx" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.551334 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.556903 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-dedf-account-create-q9kv8" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.564573 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-fa20-account-create-k4vdx"] Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.579568 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxzcd\" (UniqueName: \"kubernetes.io/projected/ea90be69-850d-4707-8931-d91bed695f91-kube-api-access-xxzcd\") pod \"nova-cell0-d11f-account-create-9mhcb\" (UID: \"ea90be69-850d-4707-8931-d91bed695f91\") " pod="openstack/nova-cell0-d11f-account-create-9mhcb" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.598664 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxzcd\" (UniqueName: \"kubernetes.io/projected/ea90be69-850d-4707-8931-d91bed695f91-kube-api-access-xxzcd\") pod \"nova-cell0-d11f-account-create-9mhcb\" (UID: \"ea90be69-850d-4707-8931-d91bed695f91\") " pod="openstack/nova-cell0-d11f-account-create-9mhcb" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.685317 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5mdh\" (UniqueName: \"kubernetes.io/projected/dfcfb6eb-044a-4f21-b60b-333306949a88-kube-api-access-m5mdh\") pod \"nova-cell1-fa20-account-create-k4vdx\" (UID: \"dfcfb6eb-044a-4f21-b60b-333306949a88\") " pod="openstack/nova-cell1-fa20-account-create-k4vdx" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.695430 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d11f-account-create-9mhcb" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.787908 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5mdh\" (UniqueName: \"kubernetes.io/projected/dfcfb6eb-044a-4f21-b60b-333306949a88-kube-api-access-m5mdh\") pod \"nova-cell1-fa20-account-create-k4vdx\" (UID: \"dfcfb6eb-044a-4f21-b60b-333306949a88\") " pod="openstack/nova-cell1-fa20-account-create-k4vdx" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.794352 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263","Type":"ContainerStarted","Data":"ffac8f2331d94f619388191641ecaec1e4bf9ef4373430ad90fa6aa3df392c67"} Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.803064 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3","Type":"ContainerStarted","Data":"71d9c15e657a016b48f4a023c8e5569da378263f5908de18ad7886fbcef9528d"} Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.815827 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5mdh\" (UniqueName: \"kubernetes.io/projected/dfcfb6eb-044a-4f21-b60b-333306949a88-kube-api-access-m5mdh\") pod \"nova-cell1-fa20-account-create-k4vdx\" (UID: \"dfcfb6eb-044a-4f21-b60b-333306949a88\") " pod="openstack/nova-cell1-fa20-account-create-k4vdx" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.830822 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.830804121 podStartE2EDuration="4.830804121s" podCreationTimestamp="2025-10-14 15:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:12:00.817574931 +0000 UTC m=+1382.404358380" watchObservedRunningTime="2025-10-14 15:12:00.830804121 +0000 UTC m=+1382.417587570" Oct 14 15:12:00 crc kubenswrapper[4860]: I1014 15:12:00.864906 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fa20-account-create-k4vdx" Oct 14 15:12:01 crc kubenswrapper[4860]: I1014 15:12:01.178419 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-dedf-account-create-q9kv8"] Oct 14 15:12:01 crc kubenswrapper[4860]: I1014 15:12:01.379500 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d11f-account-create-9mhcb"] Oct 14 15:12:01 crc kubenswrapper[4860]: I1014 15:12:01.523666 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-fa20-account-create-k4vdx"] Oct 14 15:12:01 crc kubenswrapper[4860]: W1014 15:12:01.530246 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfcfb6eb_044a_4f21_b60b_333306949a88.slice/crio-4d709574bc96728c8796f95020fb66621e9a4a269eab9f225f254a6b81f8832d WatchSource:0}: Error finding container 4d709574bc96728c8796f95020fb66621e9a4a269eab9f225f254a6b81f8832d: Status 404 returned error can't find the container with id 4d709574bc96728c8796f95020fb66621e9a4a269eab9f225f254a6b81f8832d Oct 14 15:12:01 crc kubenswrapper[4860]: I1014 15:12:01.814138 4860 generic.go:334] "Generic (PLEG): container finished" podID="dfcfb6eb-044a-4f21-b60b-333306949a88" containerID="4768e56088d10074a60b9e0400adb465346357b0c57d0bade63f6456b64becf2" exitCode=0 Oct 14 15:12:01 crc kubenswrapper[4860]: I1014 15:12:01.814218 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fa20-account-create-k4vdx" event={"ID":"dfcfb6eb-044a-4f21-b60b-333306949a88","Type":"ContainerDied","Data":"4768e56088d10074a60b9e0400adb465346357b0c57d0bade63f6456b64becf2"} Oct 14 15:12:01 crc kubenswrapper[4860]: I1014 15:12:01.814249 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fa20-account-create-k4vdx" event={"ID":"dfcfb6eb-044a-4f21-b60b-333306949a88","Type":"ContainerStarted","Data":"4d709574bc96728c8796f95020fb66621e9a4a269eab9f225f254a6b81f8832d"} Oct 14 15:12:01 crc kubenswrapper[4860]: I1014 15:12:01.816346 4860 generic.go:334] "Generic (PLEG): container finished" podID="ea90be69-850d-4707-8931-d91bed695f91" containerID="ec320ee991fee7e30ab4ea84e757e483e72571a3789f13b056330b85d8325b8a" exitCode=0 Oct 14 15:12:01 crc kubenswrapper[4860]: I1014 15:12:01.816388 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d11f-account-create-9mhcb" event={"ID":"ea90be69-850d-4707-8931-d91bed695f91","Type":"ContainerDied","Data":"ec320ee991fee7e30ab4ea84e757e483e72571a3789f13b056330b85d8325b8a"} Oct 14 15:12:01 crc kubenswrapper[4860]: I1014 15:12:01.816457 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d11f-account-create-9mhcb" event={"ID":"ea90be69-850d-4707-8931-d91bed695f91","Type":"ContainerStarted","Data":"80a5729050f62bcec1b67a5135f848ab544301cbff6ab79a4b96fa17f851064f"} Oct 14 15:12:01 crc kubenswrapper[4860]: I1014 15:12:01.818394 4860 generic.go:334] "Generic (PLEG): container finished" podID="5b949f2b-c0a2-4371-b9ee-8eea850586b1" containerID="9041798651ff223962c93c6e70dfb10a5364fba0d0c7248816b7f64a044507fb" exitCode=0 Oct 14 15:12:01 crc 
kubenswrapper[4860]: I1014 15:12:01.818425 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dedf-account-create-q9kv8" event={"ID":"5b949f2b-c0a2-4371-b9ee-8eea850586b1","Type":"ContainerDied","Data":"9041798651ff223962c93c6e70dfb10a5364fba0d0c7248816b7f64a044507fb"} Oct 14 15:12:01 crc kubenswrapper[4860]: I1014 15:12:01.818458 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dedf-account-create-q9kv8" event={"ID":"5b949f2b-c0a2-4371-b9ee-8eea850586b1","Type":"ContainerStarted","Data":"5d6675f166b6c13347ecc50e8ba3291566ba2c3944c7164ed644c9e8085d25f7"} Oct 14 15:12:01 crc kubenswrapper[4860]: I1014 15:12:01.823242 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9eea5159-5fa7-4ef7-a7c3-4f98d05085e3","Type":"ContainerStarted","Data":"b36fa8037bd1e0ad62b47a62a191885bd2dac21729cbc66ae9ba5c8c55f16bc9"} Oct 14 15:12:01 crc kubenswrapper[4860]: I1014 15:12:01.854835 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.8548145179999995 podStartE2EDuration="4.854814518s" podCreationTimestamp="2025-10-14 15:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:12:01.851239602 +0000 UTC m=+1383.438023061" watchObservedRunningTime="2025-10-14 15:12:01.854814518 +0000 UTC m=+1383.441597977" Oct 14 15:12:02 crc kubenswrapper[4860]: I1014 15:12:02.587568 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:12:02 crc kubenswrapper[4860]: I1014 15:12:02.596313 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.533487 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fa20-account-create-k4vdx" Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.542677 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-dedf-account-create-q9kv8" Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.554801 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-d11f-account-create-9mhcb" Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.715135 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njfz4\" (UniqueName: \"kubernetes.io/projected/5b949f2b-c0a2-4371-b9ee-8eea850586b1-kube-api-access-njfz4\") pod \"5b949f2b-c0a2-4371-b9ee-8eea850586b1\" (UID: \"5b949f2b-c0a2-4371-b9ee-8eea850586b1\") " Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.715202 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5mdh\" (UniqueName: \"kubernetes.io/projected/dfcfb6eb-044a-4f21-b60b-333306949a88-kube-api-access-m5mdh\") pod \"dfcfb6eb-044a-4f21-b60b-333306949a88\" (UID: \"dfcfb6eb-044a-4f21-b60b-333306949a88\") " Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.715300 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxzcd\" (UniqueName: \"kubernetes.io/projected/ea90be69-850d-4707-8931-d91bed695f91-kube-api-access-xxzcd\") pod \"ea90be69-850d-4707-8931-d91bed695f91\" (UID: \"ea90be69-850d-4707-8931-d91bed695f91\") " Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.722792 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b949f2b-c0a2-4371-b9ee-8eea850586b1-kube-api-access-njfz4" (OuterVolumeSpecName: "kube-api-access-njfz4") pod "5b949f2b-c0a2-4371-b9ee-8eea850586b1" (UID: "5b949f2b-c0a2-4371-b9ee-8eea850586b1"). InnerVolumeSpecName "kube-api-access-njfz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.722843 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfcfb6eb-044a-4f21-b60b-333306949a88-kube-api-access-m5mdh" (OuterVolumeSpecName: "kube-api-access-m5mdh") pod "dfcfb6eb-044a-4f21-b60b-333306949a88" (UID: "dfcfb6eb-044a-4f21-b60b-333306949a88"). InnerVolumeSpecName "kube-api-access-m5mdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.723289 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea90be69-850d-4707-8931-d91bed695f91-kube-api-access-xxzcd" (OuterVolumeSpecName: "kube-api-access-xxzcd") pod "ea90be69-850d-4707-8931-d91bed695f91" (UID: "ea90be69-850d-4707-8931-d91bed695f91"). InnerVolumeSpecName "kube-api-access-xxzcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.817386 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxzcd\" (UniqueName: \"kubernetes.io/projected/ea90be69-850d-4707-8931-d91bed695f91-kube-api-access-xxzcd\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.817417 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njfz4\" (UniqueName: \"kubernetes.io/projected/5b949f2b-c0a2-4371-b9ee-8eea850586b1-kube-api-access-njfz4\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.817427 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5mdh\" (UniqueName: \"kubernetes.io/projected/dfcfb6eb-044a-4f21-b60b-333306949a88-kube-api-access-m5mdh\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.845347 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dedf-account-create-q9kv8" event={"ID":"5b949f2b-c0a2-4371-b9ee-8eea850586b1","Type":"ContainerDied","Data":"5d6675f166b6c13347ecc50e8ba3291566ba2c3944c7164ed644c9e8085d25f7"} Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.845382 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d6675f166b6c13347ecc50e8ba3291566ba2c3944c7164ed644c9e8085d25f7" Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.845392 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-dedf-account-create-q9kv8" Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.864413 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fa20-account-create-k4vdx" event={"ID":"dfcfb6eb-044a-4f21-b60b-333306949a88","Type":"ContainerDied","Data":"4d709574bc96728c8796f95020fb66621e9a4a269eab9f225f254a6b81f8832d"} Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.864448 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d709574bc96728c8796f95020fb66621e9a4a269eab9f225f254a6b81f8832d" Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.864501 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fa20-account-create-k4vdx" Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.870933 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d11f-account-create-9mhcb" event={"ID":"ea90be69-850d-4707-8931-d91bed695f91","Type":"ContainerDied","Data":"80a5729050f62bcec1b67a5135f848ab544301cbff6ab79a4b96fa17f851064f"} Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.870974 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80a5729050f62bcec1b67a5135f848ab544301cbff6ab79a4b96fa17f851064f" Oct 14 15:12:03 crc kubenswrapper[4860]: I1014 15:12:03.871073 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-d11f-account-create-9mhcb" Oct 14 15:12:04 crc kubenswrapper[4860]: I1014 15:12:04.655385 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-8795558b4-cgsrj" Oct 14 15:12:04 crc kubenswrapper[4860]: I1014 15:12:04.721305 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7dd7969c76-f8cq5"] Oct 14 15:12:04 crc kubenswrapper[4860]: I1014 15:12:04.721507 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7dd7969c76-f8cq5" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon-log" containerID="cri-o://c0475b19ac764863a4f2450bff029c0c7ec4b25661f0aa2940b7727fb8b0f16c" gracePeriod=30 Oct 14 15:12:04 crc kubenswrapper[4860]: I1014 15:12:04.721909 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7dd7969c76-f8cq5" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon" containerID="cri-o://bd18509ad5611c5c1fa10197f2c020ce51fe5885318f688c39d88d3c9eb96249" gracePeriod=30 Oct 14 15:12:04 crc kubenswrapper[4860]: I1014 15:12:04.737274 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7dd7969c76-f8cq5" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.618798 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dvtwm"] Oct 14 15:12:05 crc kubenswrapper[4860]: E1014 15:12:05.619903 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfcfb6eb-044a-4f21-b60b-333306949a88" containerName="mariadb-account-create" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.619981 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfcfb6eb-044a-4f21-b60b-333306949a88" containerName="mariadb-account-create" Oct 14 15:12:05 crc kubenswrapper[4860]: E1014 15:12:05.620062 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea90be69-850d-4707-8931-d91bed695f91" containerName="mariadb-account-create" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.620115 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea90be69-850d-4707-8931-d91bed695f91" containerName="mariadb-account-create" Oct 14 15:12:05 crc kubenswrapper[4860]: E1014 15:12:05.620183 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b949f2b-c0a2-4371-b9ee-8eea850586b1" containerName="mariadb-account-create" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.620239 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b949f2b-c0a2-4371-b9ee-8eea850586b1" containerName="mariadb-account-create" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.620460 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfcfb6eb-044a-4f21-b60b-333306949a88" containerName="mariadb-account-create" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.620531 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea90be69-850d-4707-8931-d91bed695f91" containerName="mariadb-account-create" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.620594 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b949f2b-c0a2-4371-b9ee-8eea850586b1" containerName="mariadb-account-create" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.621537 
4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dvtwm" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.625318 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ftthk" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.628414 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.633677 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dvtwm"] Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.636691 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.761697 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-config-data\") pod \"nova-cell0-conductor-db-sync-dvtwm\" (UID: \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\") " pod="openstack/nova-cell0-conductor-db-sync-dvtwm" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.761811 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-scripts\") pod \"nova-cell0-conductor-db-sync-dvtwm\" (UID: \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\") " pod="openstack/nova-cell0-conductor-db-sync-dvtwm" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.761857 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dvtwm\" (UID: \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\") " pod="openstack/nova-cell0-conductor-db-sync-dvtwm" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.761882 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vzlx\" (UniqueName: \"kubernetes.io/projected/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-kube-api-access-6vzlx\") pod \"nova-cell0-conductor-db-sync-dvtwm\" (UID: \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\") " pod="openstack/nova-cell0-conductor-db-sync-dvtwm" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.863841 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-config-data\") pod \"nova-cell0-conductor-db-sync-dvtwm\" (UID: \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\") " pod="openstack/nova-cell0-conductor-db-sync-dvtwm" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.863929 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-scripts\") pod \"nova-cell0-conductor-db-sync-dvtwm\" (UID: \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\") " pod="openstack/nova-cell0-conductor-db-sync-dvtwm" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.863964 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-combined-ca-bundle\") pod 
\"nova-cell0-conductor-db-sync-dvtwm\" (UID: \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\") " pod="openstack/nova-cell0-conductor-db-sync-dvtwm" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.863985 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vzlx\" (UniqueName: \"kubernetes.io/projected/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-kube-api-access-6vzlx\") pod \"nova-cell0-conductor-db-sync-dvtwm\" (UID: \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\") " pod="openstack/nova-cell0-conductor-db-sync-dvtwm" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.873702 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-scripts\") pod \"nova-cell0-conductor-db-sync-dvtwm\" (UID: \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\") " pod="openstack/nova-cell0-conductor-db-sync-dvtwm" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.875603 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dvtwm\" (UID: \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\") " pod="openstack/nova-cell0-conductor-db-sync-dvtwm" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.882114 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-config-data\") pod \"nova-cell0-conductor-db-sync-dvtwm\" (UID: \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\") " pod="openstack/nova-cell0-conductor-db-sync-dvtwm" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.882588 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vzlx\" (UniqueName: \"kubernetes.io/projected/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-kube-api-access-6vzlx\") pod \"nova-cell0-conductor-db-sync-dvtwm\" (UID: \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\") " pod="openstack/nova-cell0-conductor-db-sync-dvtwm" Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.908337 4860 generic.go:334] "Generic (PLEG): container finished" podID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerID="2ff58b86b125a98e41f5c2beb9a6324438b0b55d5fdfc3977b281c87226565ac" exitCode=0 Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.908377 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6549e8e-cc35-4bdc-87dc-e2f924805bc9","Type":"ContainerDied","Data":"2ff58b86b125a98e41f5c2beb9a6324438b0b55d5fdfc3977b281c87226565ac"} Oct 14 15:12:05 crc kubenswrapper[4860]: I1014 15:12:05.939563 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dvtwm" Oct 14 15:12:06 crc kubenswrapper[4860]: I1014 15:12:06.426294 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dvtwm"] Oct 14 15:12:06 crc kubenswrapper[4860]: W1014 15:12:06.436697 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a0a3f5b_875c_49b4_8649_ed231cbb71c0.slice/crio-ddd0ac6086436c2c766e85f0374a2159bf8dc54ed83fdccb133a243be21b25de WatchSource:0}: Error finding container ddd0ac6086436c2c766e85f0374a2159bf8dc54ed83fdccb133a243be21b25de: Status 404 returned error can't find the container with id ddd0ac6086436c2c766e85f0374a2159bf8dc54ed83fdccb133a243be21b25de Oct 14 15:12:06 crc kubenswrapper[4860]: I1014 15:12:06.918561 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dvtwm" event={"ID":"3a0a3f5b-875c-49b4-8649-ed231cbb71c0","Type":"ContainerStarted","Data":"ddd0ac6086436c2c766e85f0374a2159bf8dc54ed83fdccb133a243be21b25de"} Oct 14 15:12:07 crc kubenswrapper[4860]: I1014 15:12:07.299978 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 14 15:12:07 crc kubenswrapper[4860]: I1014 15:12:07.300040 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 14 15:12:07 crc kubenswrapper[4860]: I1014 15:12:07.330607 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 14 15:12:07 crc kubenswrapper[4860]: I1014 15:12:07.343563 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 14 15:12:07 crc kubenswrapper[4860]: I1014 15:12:07.927873 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 14 15:12:07 crc kubenswrapper[4860]: I1014 15:12:07.927921 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 14 15:12:08 crc kubenswrapper[4860]: I1014 15:12:08.120547 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7dd7969c76-f8cq5" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:33252->10.217.0.150:8443: read: connection reset by peer" Oct 14 15:12:08 crc kubenswrapper[4860]: I1014 15:12:08.298763 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 14 15:12:08 crc kubenswrapper[4860]: I1014 15:12:08.299163 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 14 15:12:08 crc kubenswrapper[4860]: I1014 15:12:08.336905 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 14 15:12:08 crc kubenswrapper[4860]: I1014 15:12:08.351839 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 14 15:12:08 crc kubenswrapper[4860]: I1014 15:12:08.641106 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7dd7969c76-f8cq5" 
podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Oct 14 15:12:08 crc kubenswrapper[4860]: I1014 15:12:08.941291 4860 generic.go:334] "Generic (PLEG): container finished" podID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerID="bd18509ad5611c5c1fa10197f2c020ce51fe5885318f688c39d88d3c9eb96249" exitCode=0 Oct 14 15:12:08 crc kubenswrapper[4860]: I1014 15:12:08.941406 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd7969c76-f8cq5" event={"ID":"e59fdcc0-928b-485d-a66b-450a1d1d76f4","Type":"ContainerDied","Data":"bd18509ad5611c5c1fa10197f2c020ce51fe5885318f688c39d88d3c9eb96249"} Oct 14 15:12:08 crc kubenswrapper[4860]: I1014 15:12:08.941904 4860 scope.go:117] "RemoveContainer" containerID="48c829aeecd60e8eb72c1f7f8f0dd773866393ac607409fd129497c22dfd7dfc" Oct 14 15:12:08 crc kubenswrapper[4860]: I1014 15:12:08.942293 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 14 15:12:08 crc kubenswrapper[4860]: I1014 15:12:08.942340 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 14 15:12:10 crc kubenswrapper[4860]: I1014 15:12:10.266304 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 14 15:12:10 crc kubenswrapper[4860]: I1014 15:12:10.266703 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 15:12:10 crc kubenswrapper[4860]: I1014 15:12:10.269214 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 14 15:12:11 crc kubenswrapper[4860]: I1014 15:12:11.535780 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 14 15:12:11 crc kubenswrapper[4860]: I1014 15:12:11.536293 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 15:12:11 crc kubenswrapper[4860]: I1014 15:12:11.538518 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 14 15:12:15 crc kubenswrapper[4860]: I1014 15:12:15.012669 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dvtwm" event={"ID":"3a0a3f5b-875c-49b4-8649-ed231cbb71c0","Type":"ContainerStarted","Data":"6ad05a8e79f65e07a7c0435d2adc192c5d9aa1507b50627b41120ab66467bb0e"} Oct 14 15:12:15 crc kubenswrapper[4860]: I1014 15:12:15.029775 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-dvtwm" podStartSLOduration=1.99654036 podStartE2EDuration="10.029756448s" podCreationTimestamp="2025-10-14 15:12:05 +0000 UTC" firstStartedPulling="2025-10-14 15:12:06.446666195 +0000 UTC m=+1388.033449634" lastFinishedPulling="2025-10-14 15:12:14.479882273 +0000 UTC m=+1396.066665722" observedRunningTime="2025-10-14 15:12:15.02696207 +0000 UTC m=+1396.613745519" watchObservedRunningTime="2025-10-14 15:12:15.029756448 +0000 UTC m=+1396.616539897" Oct 14 15:12:18 crc kubenswrapper[4860]: I1014 15:12:18.640764 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7dd7969c76-f8cq5" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon" probeResult="failure" 
output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Oct 14 15:12:22 crc kubenswrapper[4860]: I1014 15:12:21.999566 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Oct 14 15:12:26 crc kubenswrapper[4860]: I1014 15:12:26.949626 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:12:26 crc kubenswrapper[4860]: I1014 15:12:26.980046 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-log-httpd\") pod \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " Oct 14 15:12:26 crc kubenswrapper[4860]: I1014 15:12:26.980223 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-run-httpd\") pod \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " Oct 14 15:12:26 crc kubenswrapper[4860]: I1014 15:12:26.980267 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-sg-core-conf-yaml\") pod \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " Oct 14 15:12:26 crc kubenswrapper[4860]: I1014 15:12:26.980316 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-scripts\") pod \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " Oct 14 15:12:26 crc kubenswrapper[4860]: I1014 15:12:26.980374 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdrmt\" (UniqueName: \"kubernetes.io/projected/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-kube-api-access-rdrmt\") pod \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " Oct 14 15:12:26 crc kubenswrapper[4860]: I1014 15:12:26.980407 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-config-data\") pod \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " Oct 14 15:12:26 crc kubenswrapper[4860]: I1014 15:12:26.980449 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-combined-ca-bundle\") pod \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\" (UID: \"c6549e8e-cc35-4bdc-87dc-e2f924805bc9\") " Oct 14 15:12:26 crc kubenswrapper[4860]: I1014 15:12:26.982011 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c6549e8e-cc35-4bdc-87dc-e2f924805bc9" (UID: "c6549e8e-cc35-4bdc-87dc-e2f924805bc9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:12:26 crc kubenswrapper[4860]: I1014 15:12:26.982223 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c6549e8e-cc35-4bdc-87dc-e2f924805bc9" (UID: "c6549e8e-cc35-4bdc-87dc-e2f924805bc9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:12:26 crc kubenswrapper[4860]: I1014 15:12:26.988212 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-kube-api-access-rdrmt" (OuterVolumeSpecName: "kube-api-access-rdrmt") pod "c6549e8e-cc35-4bdc-87dc-e2f924805bc9" (UID: "c6549e8e-cc35-4bdc-87dc-e2f924805bc9"). InnerVolumeSpecName "kube-api-access-rdrmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.010342 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-scripts" (OuterVolumeSpecName: "scripts") pod "c6549e8e-cc35-4bdc-87dc-e2f924805bc9" (UID: "c6549e8e-cc35-4bdc-87dc-e2f924805bc9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.056696 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c6549e8e-cc35-4bdc-87dc-e2f924805bc9" (UID: "c6549e8e-cc35-4bdc-87dc-e2f924805bc9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.082500 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.082525 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.082536 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.082545 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.082553 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdrmt\" (UniqueName: \"kubernetes.io/projected/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-kube-api-access-rdrmt\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.098746 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6549e8e-cc35-4bdc-87dc-e2f924805bc9" (UID: "c6549e8e-cc35-4bdc-87dc-e2f924805bc9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.117156 4860 generic.go:334] "Generic (PLEG): container finished" podID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerID="74df860add8df6f60933def0fc39422a7325f5a602a73fac22bb0b41b02c8b1b" exitCode=137 Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.117203 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6549e8e-cc35-4bdc-87dc-e2f924805bc9","Type":"ContainerDied","Data":"74df860add8df6f60933def0fc39422a7325f5a602a73fac22bb0b41b02c8b1b"} Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.117233 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6549e8e-cc35-4bdc-87dc-e2f924805bc9","Type":"ContainerDied","Data":"cac5c9fd7ad1e6ed51afccfe4b6b2657fd2b06e2c091eb998ed5b1c608ebffa7"} Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.117255 4860 scope.go:117] "RemoveContainer" containerID="74df860add8df6f60933def0fc39422a7325f5a602a73fac22bb0b41b02c8b1b" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.117413 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.122013 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-config-data" (OuterVolumeSpecName: "config-data") pod "c6549e8e-cc35-4bdc-87dc-e2f924805bc9" (UID: "c6549e8e-cc35-4bdc-87dc-e2f924805bc9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.142823 4860 scope.go:117] "RemoveContainer" containerID="899eb93b9c34427ebabbe5d20d99d6226028f574bbdbb381f3b9091479ab9498" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.167622 4860 scope.go:117] "RemoveContainer" containerID="7422277650ec60d77215372fcdcdc35e7eef7a65476cd131b861c865dfb81008" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.190179 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.190212 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6549e8e-cc35-4bdc-87dc-e2f924805bc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.193252 4860 scope.go:117] "RemoveContainer" containerID="2ff58b86b125a98e41f5c2beb9a6324438b0b55d5fdfc3977b281c87226565ac" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.216195 4860 scope.go:117] "RemoveContainer" containerID="74df860add8df6f60933def0fc39422a7325f5a602a73fac22bb0b41b02c8b1b" Oct 14 15:12:27 crc kubenswrapper[4860]: E1014 15:12:27.216665 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74df860add8df6f60933def0fc39422a7325f5a602a73fac22bb0b41b02c8b1b\": container with ID starting with 74df860add8df6f60933def0fc39422a7325f5a602a73fac22bb0b41b02c8b1b not found: ID does not exist" containerID="74df860add8df6f60933def0fc39422a7325f5a602a73fac22bb0b41b02c8b1b" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.216719 4860 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"74df860add8df6f60933def0fc39422a7325f5a602a73fac22bb0b41b02c8b1b"} err="failed to get container status \"74df860add8df6f60933def0fc39422a7325f5a602a73fac22bb0b41b02c8b1b\": rpc error: code = NotFound desc = could not find container \"74df860add8df6f60933def0fc39422a7325f5a602a73fac22bb0b41b02c8b1b\": container with ID starting with 74df860add8df6f60933def0fc39422a7325f5a602a73fac22bb0b41b02c8b1b not found: ID does not exist" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.216752 4860 scope.go:117] "RemoveContainer" containerID="899eb93b9c34427ebabbe5d20d99d6226028f574bbdbb381f3b9091479ab9498" Oct 14 15:12:27 crc kubenswrapper[4860]: E1014 15:12:27.217186 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"899eb93b9c34427ebabbe5d20d99d6226028f574bbdbb381f3b9091479ab9498\": container with ID starting with 899eb93b9c34427ebabbe5d20d99d6226028f574bbdbb381f3b9091479ab9498 not found: ID does not exist" containerID="899eb93b9c34427ebabbe5d20d99d6226028f574bbdbb381f3b9091479ab9498" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.217218 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"899eb93b9c34427ebabbe5d20d99d6226028f574bbdbb381f3b9091479ab9498"} err="failed to get container status \"899eb93b9c34427ebabbe5d20d99d6226028f574bbdbb381f3b9091479ab9498\": rpc error: code = NotFound desc = could not find container \"899eb93b9c34427ebabbe5d20d99d6226028f574bbdbb381f3b9091479ab9498\": container with ID starting with 899eb93b9c34427ebabbe5d20d99d6226028f574bbdbb381f3b9091479ab9498 not found: ID does not exist" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.217241 4860 scope.go:117] "RemoveContainer" containerID="7422277650ec60d77215372fcdcdc35e7eef7a65476cd131b861c865dfb81008" Oct 14 15:12:27 crc kubenswrapper[4860]: E1014 15:12:27.217494 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7422277650ec60d77215372fcdcdc35e7eef7a65476cd131b861c865dfb81008\": container with ID starting with 7422277650ec60d77215372fcdcdc35e7eef7a65476cd131b861c865dfb81008 not found: ID does not exist" containerID="7422277650ec60d77215372fcdcdc35e7eef7a65476cd131b861c865dfb81008" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.217872 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7422277650ec60d77215372fcdcdc35e7eef7a65476cd131b861c865dfb81008"} err="failed to get container status \"7422277650ec60d77215372fcdcdc35e7eef7a65476cd131b861c865dfb81008\": rpc error: code = NotFound desc = could not find container \"7422277650ec60d77215372fcdcdc35e7eef7a65476cd131b861c865dfb81008\": container with ID starting with 7422277650ec60d77215372fcdcdc35e7eef7a65476cd131b861c865dfb81008 not found: ID does not exist" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.217889 4860 scope.go:117] "RemoveContainer" containerID="2ff58b86b125a98e41f5c2beb9a6324438b0b55d5fdfc3977b281c87226565ac" Oct 14 15:12:27 crc kubenswrapper[4860]: E1014 15:12:27.218143 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff58b86b125a98e41f5c2beb9a6324438b0b55d5fdfc3977b281c87226565ac\": container with ID starting with 2ff58b86b125a98e41f5c2beb9a6324438b0b55d5fdfc3977b281c87226565ac not found: ID does not exist" 
containerID="2ff58b86b125a98e41f5c2beb9a6324438b0b55d5fdfc3977b281c87226565ac" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.218168 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff58b86b125a98e41f5c2beb9a6324438b0b55d5fdfc3977b281c87226565ac"} err="failed to get container status \"2ff58b86b125a98e41f5c2beb9a6324438b0b55d5fdfc3977b281c87226565ac\": rpc error: code = NotFound desc = could not find container \"2ff58b86b125a98e41f5c2beb9a6324438b0b55d5fdfc3977b281c87226565ac\": container with ID starting with 2ff58b86b125a98e41f5c2beb9a6324438b0b55d5fdfc3977b281c87226565ac not found: ID does not exist" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.459380 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.470817 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.491004 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:12:27 crc kubenswrapper[4860]: E1014 15:12:27.491464 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="ceilometer-notification-agent" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.491493 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="ceilometer-notification-agent" Oct 14 15:12:27 crc kubenswrapper[4860]: E1014 15:12:27.491539 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="sg-core" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.491548 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="sg-core" Oct 14 15:12:27 crc kubenswrapper[4860]: E1014 15:12:27.491564 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="proxy-httpd" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.491571 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="proxy-httpd" Oct 14 15:12:27 crc kubenswrapper[4860]: E1014 15:12:27.491588 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="ceilometer-central-agent" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.491598 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="ceilometer-central-agent" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.491816 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="ceilometer-central-agent" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.491841 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="sg-core" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.491869 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="ceilometer-notification-agent" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.491887 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" containerName="proxy-httpd" Oct 14 
15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.494012 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.495824 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.496049 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.517308 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.598625 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/428069e3-797e-47db-b53e-565cf5a366bd-run-httpd\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.598725 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5vw2\" (UniqueName: \"kubernetes.io/projected/428069e3-797e-47db-b53e-565cf5a366bd-kube-api-access-b5vw2\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.598792 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-config-data\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.598895 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.598958 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-scripts\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.598986 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/428069e3-797e-47db-b53e-565cf5a366bd-log-httpd\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.599054 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.700890 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5vw2\" (UniqueName: \"kubernetes.io/projected/428069e3-797e-47db-b53e-565cf5a366bd-kube-api-access-b5vw2\") pod \"ceilometer-0\" (UID: 
\"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.700959 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-config-data\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.701001 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.701070 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-scripts\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.701097 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/428069e3-797e-47db-b53e-565cf5a366bd-log-httpd\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.701141 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.701211 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/428069e3-797e-47db-b53e-565cf5a366bd-run-httpd\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.701704 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/428069e3-797e-47db-b53e-565cf5a366bd-run-httpd\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.702084 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/428069e3-797e-47db-b53e-565cf5a366bd-log-httpd\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.705703 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-config-data\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.706176 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-scripts\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc 
kubenswrapper[4860]: I1014 15:12:27.707473 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.719116 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.721435 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5vw2\" (UniqueName: \"kubernetes.io/projected/428069e3-797e-47db-b53e-565cf5a366bd-kube-api-access-b5vw2\") pod \"ceilometer-0\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " pod="openstack/ceilometer-0" Oct 14 15:12:27 crc kubenswrapper[4860]: I1014 15:12:27.810984 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:12:28 crc kubenswrapper[4860]: I1014 15:12:28.157824 4860 generic.go:334] "Generic (PLEG): container finished" podID="3a0a3f5b-875c-49b4-8649-ed231cbb71c0" containerID="6ad05a8e79f65e07a7c0435d2adc192c5d9aa1507b50627b41120ab66467bb0e" exitCode=0 Oct 14 15:12:28 crc kubenswrapper[4860]: I1014 15:12:28.158169 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dvtwm" event={"ID":"3a0a3f5b-875c-49b4-8649-ed231cbb71c0","Type":"ContainerDied","Data":"6ad05a8e79f65e07a7c0435d2adc192c5d9aa1507b50627b41120ab66467bb0e"} Oct 14 15:12:28 crc kubenswrapper[4860]: I1014 15:12:28.295633 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:12:28 crc kubenswrapper[4860]: W1014 15:12:28.296401 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod428069e3_797e_47db_b53e_565cf5a366bd.slice/crio-8c64dd1e9b7be7746f7c2c4ecc49b7aae82d22d96b0e51d7c33876238f59f6e5 WatchSource:0}: Error finding container 8c64dd1e9b7be7746f7c2c4ecc49b7aae82d22d96b0e51d7c33876238f59f6e5: Status 404 returned error can't find the container with id 8c64dd1e9b7be7746f7c2c4ecc49b7aae82d22d96b0e51d7c33876238f59f6e5 Oct 14 15:12:28 crc kubenswrapper[4860]: I1014 15:12:28.640759 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7dd7969c76-f8cq5" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Oct 14 15:12:29 crc kubenswrapper[4860]: I1014 15:12:29.081185 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6549e8e-cc35-4bdc-87dc-e2f924805bc9" path="/var/lib/kubelet/pods/c6549e8e-cc35-4bdc-87dc-e2f924805bc9/volumes" Oct 14 15:12:29 crc kubenswrapper[4860]: I1014 15:12:29.218313 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428069e3-797e-47db-b53e-565cf5a366bd","Type":"ContainerStarted","Data":"65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac"} Oct 14 15:12:29 crc kubenswrapper[4860]: I1014 15:12:29.218619 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"428069e3-797e-47db-b53e-565cf5a366bd","Type":"ContainerStarted","Data":"8c64dd1e9b7be7746f7c2c4ecc49b7aae82d22d96b0e51d7c33876238f59f6e5"} Oct 14 15:12:29 crc kubenswrapper[4860]: I1014 15:12:29.722322 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dvtwm" Oct 14 15:12:29 crc kubenswrapper[4860]: I1014 15:12:29.848412 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vzlx\" (UniqueName: \"kubernetes.io/projected/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-kube-api-access-6vzlx\") pod \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\" (UID: \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\") " Oct 14 15:12:29 crc kubenswrapper[4860]: I1014 15:12:29.848464 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-scripts\") pod \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\" (UID: \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\") " Oct 14 15:12:29 crc kubenswrapper[4860]: I1014 15:12:29.848546 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-config-data\") pod \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\" (UID: \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\") " Oct 14 15:12:29 crc kubenswrapper[4860]: I1014 15:12:29.849190 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-combined-ca-bundle\") pod \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\" (UID: \"3a0a3f5b-875c-49b4-8649-ed231cbb71c0\") " Oct 14 15:12:29 crc kubenswrapper[4860]: I1014 15:12:29.853069 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-kube-api-access-6vzlx" (OuterVolumeSpecName: "kube-api-access-6vzlx") pod "3a0a3f5b-875c-49b4-8649-ed231cbb71c0" (UID: "3a0a3f5b-875c-49b4-8649-ed231cbb71c0"). InnerVolumeSpecName "kube-api-access-6vzlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:12:29 crc kubenswrapper[4860]: I1014 15:12:29.856120 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-scripts" (OuterVolumeSpecName: "scripts") pod "3a0a3f5b-875c-49b4-8649-ed231cbb71c0" (UID: "3a0a3f5b-875c-49b4-8649-ed231cbb71c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:12:29 crc kubenswrapper[4860]: I1014 15:12:29.901219 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a0a3f5b-875c-49b4-8649-ed231cbb71c0" (UID: "3a0a3f5b-875c-49b4-8649-ed231cbb71c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:12:29 crc kubenswrapper[4860]: I1014 15:12:29.929079 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-config-data" (OuterVolumeSpecName: "config-data") pod "3a0a3f5b-875c-49b4-8649-ed231cbb71c0" (UID: "3a0a3f5b-875c-49b4-8649-ed231cbb71c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:12:29 crc kubenswrapper[4860]: I1014 15:12:29.952211 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:29 crc kubenswrapper[4860]: I1014 15:12:29.952251 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vzlx\" (UniqueName: \"kubernetes.io/projected/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-kube-api-access-6vzlx\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:29 crc kubenswrapper[4860]: I1014 15:12:29.952264 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:29 crc kubenswrapper[4860]: I1014 15:12:29.952273 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a0a3f5b-875c-49b4-8649-ed231cbb71c0-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.233382 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dvtwm" event={"ID":"3a0a3f5b-875c-49b4-8649-ed231cbb71c0","Type":"ContainerDied","Data":"ddd0ac6086436c2c766e85f0374a2159bf8dc54ed83fdccb133a243be21b25de"} Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.233430 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddd0ac6086436c2c766e85f0374a2159bf8dc54ed83fdccb133a243be21b25de" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.233489 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dvtwm" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.251844 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428069e3-797e-47db-b53e-565cf5a366bd","Type":"ContainerStarted","Data":"96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81"} Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.338134 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 15:12:30 crc kubenswrapper[4860]: E1014 15:12:30.338627 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0a3f5b-875c-49b4-8649-ed231cbb71c0" containerName="nova-cell0-conductor-db-sync" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.338652 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0a3f5b-875c-49b4-8649-ed231cbb71c0" containerName="nova-cell0-conductor-db-sync" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.338937 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a0a3f5b-875c-49b4-8649-ed231cbb71c0" containerName="nova-cell0-conductor-db-sync" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.339677 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.348489 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.348679 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ftthk" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.355079 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.461994 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvx4z\" (UniqueName: \"kubernetes.io/projected/0dd800a1-57e1-4a3b-994b-304c941b9e5e-kube-api-access-kvx4z\") pod \"nova-cell0-conductor-0\" (UID: \"0dd800a1-57e1-4a3b-994b-304c941b9e5e\") " pod="openstack/nova-cell0-conductor-0" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.462155 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd800a1-57e1-4a3b-994b-304c941b9e5e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0dd800a1-57e1-4a3b-994b-304c941b9e5e\") " pod="openstack/nova-cell0-conductor-0" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.463394 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dd800a1-57e1-4a3b-994b-304c941b9e5e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0dd800a1-57e1-4a3b-994b-304c941b9e5e\") " pod="openstack/nova-cell0-conductor-0" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.564825 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvx4z\" (UniqueName: \"kubernetes.io/projected/0dd800a1-57e1-4a3b-994b-304c941b9e5e-kube-api-access-kvx4z\") pod \"nova-cell0-conductor-0\" (UID: \"0dd800a1-57e1-4a3b-994b-304c941b9e5e\") " pod="openstack/nova-cell0-conductor-0" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.564923 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd800a1-57e1-4a3b-994b-304c941b9e5e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0dd800a1-57e1-4a3b-994b-304c941b9e5e\") " pod="openstack/nova-cell0-conductor-0" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.564982 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dd800a1-57e1-4a3b-994b-304c941b9e5e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0dd800a1-57e1-4a3b-994b-304c941b9e5e\") " pod="openstack/nova-cell0-conductor-0" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.571759 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dd800a1-57e1-4a3b-994b-304c941b9e5e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0dd800a1-57e1-4a3b-994b-304c941b9e5e\") " pod="openstack/nova-cell0-conductor-0" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.576670 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd800a1-57e1-4a3b-994b-304c941b9e5e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"0dd800a1-57e1-4a3b-994b-304c941b9e5e\") " pod="openstack/nova-cell0-conductor-0" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.586691 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvx4z\" (UniqueName: \"kubernetes.io/projected/0dd800a1-57e1-4a3b-994b-304c941b9e5e-kube-api-access-kvx4z\") pod \"nova-cell0-conductor-0\" (UID: \"0dd800a1-57e1-4a3b-994b-304c941b9e5e\") " pod="openstack/nova-cell0-conductor-0" Oct 14 15:12:30 crc kubenswrapper[4860]: I1014 15:12:30.708988 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 14 15:12:31 crc kubenswrapper[4860]: I1014 15:12:31.225728 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 15:12:31 crc kubenswrapper[4860]: W1014 15:12:31.240251 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0dd800a1_57e1_4a3b_994b_304c941b9e5e.slice/crio-addbdf65882f2fcc587f0a206e1fdf2ed1798865ccc7f161e31975392a4df24c WatchSource:0}: Error finding container addbdf65882f2fcc587f0a206e1fdf2ed1798865ccc7f161e31975392a4df24c: Status 404 returned error can't find the container with id addbdf65882f2fcc587f0a206e1fdf2ed1798865ccc7f161e31975392a4df24c Oct 14 15:12:31 crc kubenswrapper[4860]: I1014 15:12:31.272538 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428069e3-797e-47db-b53e-565cf5a366bd","Type":"ContainerStarted","Data":"43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294"} Oct 14 15:12:31 crc kubenswrapper[4860]: I1014 15:12:31.276584 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0dd800a1-57e1-4a3b-994b-304c941b9e5e","Type":"ContainerStarted","Data":"addbdf65882f2fcc587f0a206e1fdf2ed1798865ccc7f161e31975392a4df24c"} Oct 14 15:12:32 crc kubenswrapper[4860]: I1014 15:12:32.285472 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0dd800a1-57e1-4a3b-994b-304c941b9e5e","Type":"ContainerStarted","Data":"03105a61a8435fce1a62054632a2c8a14cb8c062c06264628f7b94d473a54699"} Oct 14 15:12:32 crc kubenswrapper[4860]: I1014 15:12:32.286092 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 14 15:12:32 crc kubenswrapper[4860]: I1014 15:12:32.313250 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.313232549 podStartE2EDuration="2.313232549s" podCreationTimestamp="2025-10-14 15:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:12:32.303153825 +0000 UTC m=+1413.889937274" watchObservedRunningTime="2025-10-14 15:12:32.313232549 +0000 UTC m=+1413.900015998" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.104644 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.241790 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e59fdcc0-928b-485d-a66b-450a1d1d76f4-config-data\") pod \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.242203 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e59fdcc0-928b-485d-a66b-450a1d1d76f4-logs\") pod \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.242316 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slscq\" (UniqueName: \"kubernetes.io/projected/e59fdcc0-928b-485d-a66b-450a1d1d76f4-kube-api-access-slscq\") pod \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.242434 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e59fdcc0-928b-485d-a66b-450a1d1d76f4-scripts\") pod \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.242633 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-combined-ca-bundle\") pod \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.242910 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-horizon-tls-certs\") pod \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.243627 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-horizon-secret-key\") pod \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\" (UID: \"e59fdcc0-928b-485d-a66b-450a1d1d76f4\") " Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.243973 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e59fdcc0-928b-485d-a66b-450a1d1d76f4-logs" (OuterVolumeSpecName: "logs") pod "e59fdcc0-928b-485d-a66b-450a1d1d76f4" (UID: "e59fdcc0-928b-485d-a66b-450a1d1d76f4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.244466 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e59fdcc0-928b-485d-a66b-450a1d1d76f4-logs\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.250317 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e59fdcc0-928b-485d-a66b-450a1d1d76f4" (UID: "e59fdcc0-928b-485d-a66b-450a1d1d76f4"). 
InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.254291 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59fdcc0-928b-485d-a66b-450a1d1d76f4-kube-api-access-slscq" (OuterVolumeSpecName: "kube-api-access-slscq") pod "e59fdcc0-928b-485d-a66b-450a1d1d76f4" (UID: "e59fdcc0-928b-485d-a66b-450a1d1d76f4"). InnerVolumeSpecName "kube-api-access-slscq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.265936 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59fdcc0-928b-485d-a66b-450a1d1d76f4-scripts" (OuterVolumeSpecName: "scripts") pod "e59fdcc0-928b-485d-a66b-450a1d1d76f4" (UID: "e59fdcc0-928b-485d-a66b-450a1d1d76f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.271340 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59fdcc0-928b-485d-a66b-450a1d1d76f4-config-data" (OuterVolumeSpecName: "config-data") pod "e59fdcc0-928b-485d-a66b-450a1d1d76f4" (UID: "e59fdcc0-928b-485d-a66b-450a1d1d76f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.283199 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e59fdcc0-928b-485d-a66b-450a1d1d76f4" (UID: "e59fdcc0-928b-485d-a66b-450a1d1d76f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.317797 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "e59fdcc0-928b-485d-a66b-450a1d1d76f4" (UID: "e59fdcc0-928b-485d-a66b-450a1d1d76f4"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.320060 4860 generic.go:334] "Generic (PLEG): container finished" podID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerID="c0475b19ac764863a4f2450bff029c0c7ec4b25661f0aa2940b7727fb8b0f16c" exitCode=137 Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.320360 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7dd7969c76-f8cq5" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.320584 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd7969c76-f8cq5" event={"ID":"e59fdcc0-928b-485d-a66b-450a1d1d76f4","Type":"ContainerDied","Data":"c0475b19ac764863a4f2450bff029c0c7ec4b25661f0aa2940b7727fb8b0f16c"} Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.320619 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd7969c76-f8cq5" event={"ID":"e59fdcc0-928b-485d-a66b-450a1d1d76f4","Type":"ContainerDied","Data":"b7c9f56039bb2f2a71244fb435ca7d7bffc834ba209822873fe2f88dcc9cb5f7"} Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.320653 4860 scope.go:117] "RemoveContainer" containerID="bd18509ad5611c5c1fa10197f2c020ce51fe5885318f688c39d88d3c9eb96249" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.346042 4860 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.346368 4860 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.346477 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e59fdcc0-928b-485d-a66b-450a1d1d76f4-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.346559 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slscq\" (UniqueName: \"kubernetes.io/projected/e59fdcc0-928b-485d-a66b-450a1d1d76f4-kube-api-access-slscq\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.346640 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e59fdcc0-928b-485d-a66b-450a1d1d76f4-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.346726 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59fdcc0-928b-485d-a66b-450a1d1d76f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.374690 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7dd7969c76-f8cq5"] Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.383535 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7dd7969c76-f8cq5"] Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.501541 4860 scope.go:117] "RemoveContainer" containerID="c0475b19ac764863a4f2450bff029c0c7ec4b25661f0aa2940b7727fb8b0f16c" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.523638 4860 scope.go:117] "RemoveContainer" containerID="bd18509ad5611c5c1fa10197f2c020ce51fe5885318f688c39d88d3c9eb96249" Oct 14 15:12:35 crc kubenswrapper[4860]: E1014 15:12:35.524094 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd18509ad5611c5c1fa10197f2c020ce51fe5885318f688c39d88d3c9eb96249\": container with ID starting with bd18509ad5611c5c1fa10197f2c020ce51fe5885318f688c39d88d3c9eb96249 not found: ID does not exist" 
containerID="bd18509ad5611c5c1fa10197f2c020ce51fe5885318f688c39d88d3c9eb96249" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.524147 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd18509ad5611c5c1fa10197f2c020ce51fe5885318f688c39d88d3c9eb96249"} err="failed to get container status \"bd18509ad5611c5c1fa10197f2c020ce51fe5885318f688c39d88d3c9eb96249\": rpc error: code = NotFound desc = could not find container \"bd18509ad5611c5c1fa10197f2c020ce51fe5885318f688c39d88d3c9eb96249\": container with ID starting with bd18509ad5611c5c1fa10197f2c020ce51fe5885318f688c39d88d3c9eb96249 not found: ID does not exist" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.524173 4860 scope.go:117] "RemoveContainer" containerID="c0475b19ac764863a4f2450bff029c0c7ec4b25661f0aa2940b7727fb8b0f16c" Oct 14 15:12:35 crc kubenswrapper[4860]: E1014 15:12:35.524588 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0475b19ac764863a4f2450bff029c0c7ec4b25661f0aa2940b7727fb8b0f16c\": container with ID starting with c0475b19ac764863a4f2450bff029c0c7ec4b25661f0aa2940b7727fb8b0f16c not found: ID does not exist" containerID="c0475b19ac764863a4f2450bff029c0c7ec4b25661f0aa2940b7727fb8b0f16c" Oct 14 15:12:35 crc kubenswrapper[4860]: I1014 15:12:35.524622 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0475b19ac764863a4f2450bff029c0c7ec4b25661f0aa2940b7727fb8b0f16c"} err="failed to get container status \"c0475b19ac764863a4f2450bff029c0c7ec4b25661f0aa2940b7727fb8b0f16c\": rpc error: code = NotFound desc = could not find container \"c0475b19ac764863a4f2450bff029c0c7ec4b25661f0aa2940b7727fb8b0f16c\": container with ID starting with c0475b19ac764863a4f2450bff029c0c7ec4b25661f0aa2940b7727fb8b0f16c not found: ID does not exist" Oct 14 15:12:36 crc kubenswrapper[4860]: I1014 15:12:36.332844 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428069e3-797e-47db-b53e-565cf5a366bd","Type":"ContainerStarted","Data":"cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634"} Oct 14 15:12:36 crc kubenswrapper[4860]: I1014 15:12:36.333316 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 15:12:36 crc kubenswrapper[4860]: I1014 15:12:36.361375 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.447331031 podStartE2EDuration="9.36135187s" podCreationTimestamp="2025-10-14 15:12:27 +0000 UTC" firstStartedPulling="2025-10-14 15:12:28.298633639 +0000 UTC m=+1409.885417088" lastFinishedPulling="2025-10-14 15:12:35.212654468 +0000 UTC m=+1416.799437927" observedRunningTime="2025-10-14 15:12:36.352864255 +0000 UTC m=+1417.939647724" watchObservedRunningTime="2025-10-14 15:12:36.36135187 +0000 UTC m=+1417.948135319" Oct 14 15:12:37 crc kubenswrapper[4860]: I1014 15:12:37.074678 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" path="/var/lib/kubelet/pods/e59fdcc0-928b-485d-a66b-450a1d1d76f4/volumes" Oct 14 15:12:39 crc kubenswrapper[4860]: I1014 15:12:39.839693 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sskvp"] Oct 14 15:12:39 crc kubenswrapper[4860]: E1014 15:12:39.840415 4860 cpu_manager.go:410] "RemoveStaleState: removing container" 
Oct 14 15:12:36 crc kubenswrapper[4860]: I1014 15:12:36.332844 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428069e3-797e-47db-b53e-565cf5a366bd","Type":"ContainerStarted","Data":"cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634"}
Oct 14 15:12:36 crc kubenswrapper[4860]: I1014 15:12:36.333316 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 14 15:12:36 crc kubenswrapper[4860]: I1014 15:12:36.361375 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.447331031 podStartE2EDuration="9.36135187s" podCreationTimestamp="2025-10-14 15:12:27 +0000 UTC" firstStartedPulling="2025-10-14 15:12:28.298633639 +0000 UTC m=+1409.885417088" lastFinishedPulling="2025-10-14 15:12:35.212654468 +0000 UTC m=+1416.799437927" observedRunningTime="2025-10-14 15:12:36.352864255 +0000 UTC m=+1417.939647724" watchObservedRunningTime="2025-10-14 15:12:36.36135187 +0000 UTC m=+1417.948135319"
Oct 14 15:12:37 crc kubenswrapper[4860]: I1014 15:12:37.074678 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" path="/var/lib/kubelet/pods/e59fdcc0-928b-485d-a66b-450a1d1d76f4/volumes"
Oct 14 15:12:39 crc kubenswrapper[4860]: I1014 15:12:39.839693 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sskvp"]
Oct 14 15:12:39 crc kubenswrapper[4860]: E1014 15:12:39.840415 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon"
Oct 14 15:12:39 crc kubenswrapper[4860]: I1014 15:12:39.840428 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon"
Oct 14 15:12:39 crc kubenswrapper[4860]: E1014 15:12:39.840443 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon-log"
Oct 14 15:12:39 crc kubenswrapper[4860]: I1014 15:12:39.840448 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon-log"
Oct 14 15:12:39 crc kubenswrapper[4860]: E1014 15:12:39.840465 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon"
Oct 14 15:12:39 crc kubenswrapper[4860]: I1014 15:12:39.840471 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon"
Oct 14 15:12:39 crc kubenswrapper[4860]: I1014 15:12:39.840636 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon-log"
Oct 14 15:12:39 crc kubenswrapper[4860]: I1014 15:12:39.840651 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon"
Oct 14 15:12:39 crc kubenswrapper[4860]: I1014 15:12:39.840668 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59fdcc0-928b-485d-a66b-450a1d1d76f4" containerName="horizon"
Oct 14 15:12:39 crc kubenswrapper[4860]: I1014 15:12:39.841949 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sskvp"
Oct 14 15:12:39 crc kubenswrapper[4860]: I1014 15:12:39.855757 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sskvp"]
Oct 14 15:12:39 crc kubenswrapper[4860]: I1014 15:12:39.942046 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-catalog-content\") pod \"community-operators-sskvp\" (UID: \"ee6fbfd1-847b-4c26-ad01-dcc5e138f530\") " pod="openshift-marketplace/community-operators-sskvp"
Oct 14 15:12:39 crc kubenswrapper[4860]: I1014 15:12:39.942192 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcr4s\" (UniqueName: \"kubernetes.io/projected/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-kube-api-access-jcr4s\") pod \"community-operators-sskvp\" (UID: \"ee6fbfd1-847b-4c26-ad01-dcc5e138f530\") " pod="openshift-marketplace/community-operators-sskvp"
Oct 14 15:12:39 crc kubenswrapper[4860]: I1014 15:12:39.942248 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-utilities\") pod \"community-operators-sskvp\" (UID: \"ee6fbfd1-847b-4c26-ad01-dcc5e138f530\") " pod="openshift-marketplace/community-operators-sskvp"
Oct 14 15:12:40 crc kubenswrapper[4860]: I1014 15:12:40.043455 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-utilities\") pod \"community-operators-sskvp\" (UID:
\"ee6fbfd1-847b-4c26-ad01-dcc5e138f530\") " pod="openshift-marketplace/community-operators-sskvp" Oct 14 15:12:40 crc kubenswrapper[4860]: I1014 15:12:40.043530 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-catalog-content\") pod \"community-operators-sskvp\" (UID: \"ee6fbfd1-847b-4c26-ad01-dcc5e138f530\") " pod="openshift-marketplace/community-operators-sskvp" Oct 14 15:12:40 crc kubenswrapper[4860]: I1014 15:12:40.043635 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcr4s\" (UniqueName: \"kubernetes.io/projected/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-kube-api-access-jcr4s\") pod \"community-operators-sskvp\" (UID: \"ee6fbfd1-847b-4c26-ad01-dcc5e138f530\") " pod="openshift-marketplace/community-operators-sskvp" Oct 14 15:12:40 crc kubenswrapper[4860]: I1014 15:12:40.044382 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-utilities\") pod \"community-operators-sskvp\" (UID: \"ee6fbfd1-847b-4c26-ad01-dcc5e138f530\") " pod="openshift-marketplace/community-operators-sskvp" Oct 14 15:12:40 crc kubenswrapper[4860]: I1014 15:12:40.044603 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-catalog-content\") pod \"community-operators-sskvp\" (UID: \"ee6fbfd1-847b-4c26-ad01-dcc5e138f530\") " pod="openshift-marketplace/community-operators-sskvp" Oct 14 15:12:40 crc kubenswrapper[4860]: I1014 15:12:40.063104 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcr4s\" (UniqueName: \"kubernetes.io/projected/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-kube-api-access-jcr4s\") pod \"community-operators-sskvp\" (UID: \"ee6fbfd1-847b-4c26-ad01-dcc5e138f530\") " pod="openshift-marketplace/community-operators-sskvp" Oct 14 15:12:40 crc kubenswrapper[4860]: I1014 15:12:40.170642 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sskvp" Oct 14 15:12:40 crc kubenswrapper[4860]: I1014 15:12:40.627535 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sskvp"] Oct 14 15:12:40 crc kubenswrapper[4860]: I1014 15:12:40.751127 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.247261 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-9rxm7"] Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.248643 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9rxm7" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.250138 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.264611 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.278101 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-9rxm7"] Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.367505 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgzkv\" (UniqueName: \"kubernetes.io/projected/d17051e3-47fc-4f95-8442-3ff6327fadf7-kube-api-access-fgzkv\") pod \"nova-cell0-cell-mapping-9rxm7\" (UID: \"d17051e3-47fc-4f95-8442-3ff6327fadf7\") " pod="openstack/nova-cell0-cell-mapping-9rxm7" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.367566 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9rxm7\" (UID: \"d17051e3-47fc-4f95-8442-3ff6327fadf7\") " pod="openstack/nova-cell0-cell-mapping-9rxm7" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.367602 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-scripts\") pod \"nova-cell0-cell-mapping-9rxm7\" (UID: \"d17051e3-47fc-4f95-8442-3ff6327fadf7\") " pod="openstack/nova-cell0-cell-mapping-9rxm7" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.367636 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-config-data\") pod \"nova-cell0-cell-mapping-9rxm7\" (UID: \"d17051e3-47fc-4f95-8442-3ff6327fadf7\") " pod="openstack/nova-cell0-cell-mapping-9rxm7" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.387936 4860 generic.go:334] "Generic (PLEG): container finished" podID="ee6fbfd1-847b-4c26-ad01-dcc5e138f530" containerID="392ab0475c004969d0246451aa0dc89b0934b08e9deb8e37d72f205470004c12" exitCode=0 Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.388444 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sskvp" event={"ID":"ee6fbfd1-847b-4c26-ad01-dcc5e138f530","Type":"ContainerDied","Data":"392ab0475c004969d0246451aa0dc89b0934b08e9deb8e37d72f205470004c12"} Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.388520 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sskvp" event={"ID":"ee6fbfd1-847b-4c26-ad01-dcc5e138f530","Type":"ContainerStarted","Data":"be70dfc50eba5eabacb61329ba933b866e1f91f9c1681421fa8aceac02c79a1d"} Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.470143 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgzkv\" (UniqueName: \"kubernetes.io/projected/d17051e3-47fc-4f95-8442-3ff6327fadf7-kube-api-access-fgzkv\") pod \"nova-cell0-cell-mapping-9rxm7\" (UID: \"d17051e3-47fc-4f95-8442-3ff6327fadf7\") " pod="openstack/nova-cell0-cell-mapping-9rxm7" Oct 14 15:12:41 crc 
kubenswrapper[4860]: I1014 15:12:41.470201 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9rxm7\" (UID: \"d17051e3-47fc-4f95-8442-3ff6327fadf7\") " pod="openstack/nova-cell0-cell-mapping-9rxm7" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.470236 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-scripts\") pod \"nova-cell0-cell-mapping-9rxm7\" (UID: \"d17051e3-47fc-4f95-8442-3ff6327fadf7\") " pod="openstack/nova-cell0-cell-mapping-9rxm7" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.470270 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-config-data\") pod \"nova-cell0-cell-mapping-9rxm7\" (UID: \"d17051e3-47fc-4f95-8442-3ff6327fadf7\") " pod="openstack/nova-cell0-cell-mapping-9rxm7" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.493826 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-config-data\") pod \"nova-cell0-cell-mapping-9rxm7\" (UID: \"d17051e3-47fc-4f95-8442-3ff6327fadf7\") " pod="openstack/nova-cell0-cell-mapping-9rxm7" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.507787 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9rxm7\" (UID: \"d17051e3-47fc-4f95-8442-3ff6327fadf7\") " pod="openstack/nova-cell0-cell-mapping-9rxm7" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.508296 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-scripts\") pod \"nova-cell0-cell-mapping-9rxm7\" (UID: \"d17051e3-47fc-4f95-8442-3ff6327fadf7\") " pod="openstack/nova-cell0-cell-mapping-9rxm7" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.527745 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgzkv\" (UniqueName: \"kubernetes.io/projected/d17051e3-47fc-4f95-8442-3ff6327fadf7-kube-api-access-fgzkv\") pod \"nova-cell0-cell-mapping-9rxm7\" (UID: \"d17051e3-47fc-4f95-8442-3ff6327fadf7\") " pod="openstack/nova-cell0-cell-mapping-9rxm7" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.564515 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9rxm7" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.600374 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.601976 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.606605 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.625162 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.645082 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.646982 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.650846 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.683624 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.690548 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.697092 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.743271 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.788274 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lgp8\" (UniqueName: \"kubernetes.io/projected/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-kube-api-access-4lgp8\") pod \"nova-metadata-0\" (UID: \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\") " pod="openstack/nova-metadata-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.788373 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-config-data\") pod \"nova-api-0\" (UID: \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\") " pod="openstack/nova-api-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.788502 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aad2102-2a68-40bf-9509-8ca72c8cb48a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad2102-2a68-40bf-9509-8ca72c8cb48a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.788575 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24tvj\" (UniqueName: \"kubernetes.io/projected/1aad2102-2a68-40bf-9509-8ca72c8cb48a-kube-api-access-24tvj\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad2102-2a68-40bf-9509-8ca72c8cb48a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.788626 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28f6w\" (UniqueName: \"kubernetes.io/projected/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-kube-api-access-28f6w\") pod \"nova-api-0\" (UID: \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\") " pod="openstack/nova-api-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.788671 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-config-data\") pod \"nova-metadata-0\" (UID: \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\") " pod="openstack/nova-metadata-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.788694 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-logs\") pod \"nova-api-0\" (UID: \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\") " pod="openstack/nova-api-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.788725 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\") " pod="openstack/nova-api-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.788739 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-logs\") pod \"nova-metadata-0\" (UID: \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\") " pod="openstack/nova-metadata-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.788755 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\") " pod="openstack/nova-metadata-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.788852 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aad2102-2a68-40bf-9509-8ca72c8cb48a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad2102-2a68-40bf-9509-8ca72c8cb48a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.829199 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-kcm5k"] Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.830947 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.858430 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.890894 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aad2102-2a68-40bf-9509-8ca72c8cb48a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad2102-2a68-40bf-9509-8ca72c8cb48a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.890950 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24tvj\" (UniqueName: \"kubernetes.io/projected/1aad2102-2a68-40bf-9509-8ca72c8cb48a-kube-api-access-24tvj\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad2102-2a68-40bf-9509-8ca72c8cb48a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.890987 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28f6w\" (UniqueName: \"kubernetes.io/projected/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-kube-api-access-28f6w\") pod \"nova-api-0\" (UID: \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\") " pod="openstack/nova-api-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.891014 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-config-data\") pod \"nova-metadata-0\" (UID: \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\") " pod="openstack/nova-metadata-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.891047 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-logs\") pod \"nova-api-0\" (UID: \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\") " pod="openstack/nova-api-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.891067 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\") " pod="openstack/nova-api-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.891080 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-logs\") pod \"nova-metadata-0\" (UID: \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\") " pod="openstack/nova-metadata-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.891095 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\") " pod="openstack/nova-metadata-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.891151 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aad2102-2a68-40bf-9509-8ca72c8cb48a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad2102-2a68-40bf-9509-8ca72c8cb48a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 
15:12:41.891194 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lgp8\" (UniqueName: \"kubernetes.io/projected/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-kube-api-access-4lgp8\") pod \"nova-metadata-0\" (UID: \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\") " pod="openstack/nova-metadata-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.891219 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-config-data\") pod \"nova-api-0\" (UID: \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\") " pod="openstack/nova-api-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.896445 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-logs\") pod \"nova-api-0\" (UID: \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\") " pod="openstack/nova-api-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.922128 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-logs\") pod \"nova-metadata-0\" (UID: \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\") " pod="openstack/nova-metadata-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.942935 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28f6w\" (UniqueName: \"kubernetes.io/projected/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-kube-api-access-28f6w\") pod \"nova-api-0\" (UID: \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\") " pod="openstack/nova-api-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.943407 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aad2102-2a68-40bf-9509-8ca72c8cb48a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad2102-2a68-40bf-9509-8ca72c8cb48a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.943571 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\") " pod="openstack/nova-metadata-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.943644 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-kcm5k"] Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.943997 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aad2102-2a68-40bf-9509-8ca72c8cb48a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad2102-2a68-40bf-9509-8ca72c8cb48a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.945721 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-config-data\") pod \"nova-metadata-0\" (UID: \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\") " pod="openstack/nova-metadata-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.946840 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24tvj\" (UniqueName: 
\"kubernetes.io/projected/1aad2102-2a68-40bf-9509-8ca72c8cb48a-kube-api-access-24tvj\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad2102-2a68-40bf-9509-8ca72c8cb48a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.954945 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\") " pod="openstack/nova-api-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.957091 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-config-data\") pod \"nova-api-0\" (UID: \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\") " pod="openstack/nova-api-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.966868 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lgp8\" (UniqueName: \"kubernetes.io/projected/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-kube-api-access-4lgp8\") pod \"nova-metadata-0\" (UID: \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\") " pod="openstack/nova-metadata-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.978557 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.993218 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-config\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.993265 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.993331 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.993360 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-dns-svc\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.993386 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqwl8\" (UniqueName: \"kubernetes.io/projected/0fc82d3c-afa8-4c2a-9e49-531b56497332-kube-api-access-dqwl8\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.993428 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.995173 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 15:12:41 crc kubenswrapper[4860]: I1014 15:12:41.996302 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.001141 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.021514 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.095015 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.095093 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-dns-svc\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.095120 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqwl8\" (UniqueName: \"kubernetes.io/projected/0fc82d3c-afa8-4c2a-9e49-531b56497332-kube-api-access-dqwl8\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.095140 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgx4d\" (UniqueName: \"kubernetes.io/projected/1dfc2e67-8f9a-4dd8-bae4-026923959816-kube-api-access-pgx4d\") pod \"nova-scheduler-0\" (UID: \"1dfc2e67-8f9a-4dd8-bae4-026923959816\") " pod="openstack/nova-scheduler-0" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.095184 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.095232 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfc2e67-8f9a-4dd8-bae4-026923959816-config-data\") pod \"nova-scheduler-0\" (UID: \"1dfc2e67-8f9a-4dd8-bae4-026923959816\") " pod="openstack/nova-scheduler-0" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.095262 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-config\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.095284 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.095299 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfc2e67-8f9a-4dd8-bae4-026923959816-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1dfc2e67-8f9a-4dd8-bae4-026923959816\") " pod="openstack/nova-scheduler-0" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.096208 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.096731 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-dns-svc\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.097828 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.098426 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.098598 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.099403 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-config\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.116542 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.119376 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqwl8\" (UniqueName: \"kubernetes.io/projected/0fc82d3c-afa8-4c2a-9e49-531b56497332-kube-api-access-dqwl8\") pod \"dnsmasq-dns-757b4f8459-kcm5k\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.153770 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.212346 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfc2e67-8f9a-4dd8-bae4-026923959816-config-data\") pod \"nova-scheduler-0\" (UID: \"1dfc2e67-8f9a-4dd8-bae4-026923959816\") " pod="openstack/nova-scheduler-0" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.213201 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfc2e67-8f9a-4dd8-bae4-026923959816-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1dfc2e67-8f9a-4dd8-bae4-026923959816\") " pod="openstack/nova-scheduler-0" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.213479 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgx4d\" (UniqueName: \"kubernetes.io/projected/1dfc2e67-8f9a-4dd8-bae4-026923959816-kube-api-access-pgx4d\") pod \"nova-scheduler-0\" (UID: \"1dfc2e67-8f9a-4dd8-bae4-026923959816\") " pod="openstack/nova-scheduler-0" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.223688 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfc2e67-8f9a-4dd8-bae4-026923959816-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1dfc2e67-8f9a-4dd8-bae4-026923959816\") " pod="openstack/nova-scheduler-0" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.226190 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfc2e67-8f9a-4dd8-bae4-026923959816-config-data\") pod \"nova-scheduler-0\" (UID: \"1dfc2e67-8f9a-4dd8-bae4-026923959816\") " pod="openstack/nova-scheduler-0" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.234056 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgx4d\" (UniqueName: \"kubernetes.io/projected/1dfc2e67-8f9a-4dd8-bae4-026923959816-kube-api-access-pgx4d\") pod \"nova-scheduler-0\" (UID: \"1dfc2e67-8f9a-4dd8-bae4-026923959816\") " pod="openstack/nova-scheduler-0" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.334587 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.360067 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-9rxm7"] Oct 14 15:12:42 crc kubenswrapper[4860]: W1014 15:12:42.372290 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd17051e3_47fc_4f95_8442_3ff6327fadf7.slice/crio-717e4c7426737be5cbf1b5a7cdfdcbb8b388ff94602a567e95a0b8b126b13ae3 WatchSource:0}: Error finding container 717e4c7426737be5cbf1b5a7cdfdcbb8b388ff94602a567e95a0b8b126b13ae3: Status 404 returned error can't find the container with id 717e4c7426737be5cbf1b5a7cdfdcbb8b388ff94602a567e95a0b8b126b13ae3 Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.493138 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9rxm7" event={"ID":"d17051e3-47fc-4f95-8442-3ff6327fadf7","Type":"ContainerStarted","Data":"717e4c7426737be5cbf1b5a7cdfdcbb8b388ff94602a567e95a0b8b126b13ae3"} Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.590955 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.854286 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 15:12:42 crc kubenswrapper[4860]: W1014 15:12:42.870506 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9ff86ff_9bf0_4ec8_b801_40fc71e53742.slice/crio-74f7cf205f0b972ef75bfd31326c11468a0de94409b88c6ad4801e0e4e1094f9 WatchSource:0}: Error finding container 74f7cf205f0b972ef75bfd31326c11468a0de94409b88c6ad4801e0e4e1094f9: Status 404 returned error can't find the container with id 74f7cf205f0b972ef75bfd31326c11468a0de94409b88c6ad4801e0e4e1094f9 Oct 14 15:12:42 crc kubenswrapper[4860]: I1014 15:12:42.943364 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-kcm5k"] Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.032579 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-svrr2"] Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.035094 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-svrr2" Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.053495 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-svrr2"] Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.053903 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.056709 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.141478 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-scripts\") pod \"nova-cell1-conductor-db-sync-svrr2\" (UID: \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\") " pod="openstack/nova-cell1-conductor-db-sync-svrr2" Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.141564 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-config-data\") pod \"nova-cell1-conductor-db-sync-svrr2\" (UID: \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\") " pod="openstack/nova-cell1-conductor-db-sync-svrr2" Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.141635 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-svrr2\" (UID: \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\") " pod="openstack/nova-cell1-conductor-db-sync-svrr2" Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.141680 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzfzj\" (UniqueName: \"kubernetes.io/projected/4c0c8cbd-2256-4261-9bf5-a62952d239b4-kube-api-access-wzfzj\") pod \"nova-cell1-conductor-db-sync-svrr2\" (UID: \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\") " pod="openstack/nova-cell1-conductor-db-sync-svrr2" Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.162075 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.243334 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzfzj\" (UniqueName: \"kubernetes.io/projected/4c0c8cbd-2256-4261-9bf5-a62952d239b4-kube-api-access-wzfzj\") pod \"nova-cell1-conductor-db-sync-svrr2\" (UID: \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\") " pod="openstack/nova-cell1-conductor-db-sync-svrr2" Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.243673 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-scripts\") pod \"nova-cell1-conductor-db-sync-svrr2\" (UID: \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\") " pod="openstack/nova-cell1-conductor-db-sync-svrr2" Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.243772 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-config-data\") pod \"nova-cell1-conductor-db-sync-svrr2\" (UID: 
\"4c0c8cbd-2256-4261-9bf5-a62952d239b4\") " pod="openstack/nova-cell1-conductor-db-sync-svrr2" Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.243872 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-svrr2\" (UID: \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\") " pod="openstack/nova-cell1-conductor-db-sync-svrr2" Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.254830 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-svrr2\" (UID: \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\") " pod="openstack/nova-cell1-conductor-db-sync-svrr2" Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.256523 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-scripts\") pod \"nova-cell1-conductor-db-sync-svrr2\" (UID: \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\") " pod="openstack/nova-cell1-conductor-db-sync-svrr2" Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.262244 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzfzj\" (UniqueName: \"kubernetes.io/projected/4c0c8cbd-2256-4261-9bf5-a62952d239b4-kube-api-access-wzfzj\") pod \"nova-cell1-conductor-db-sync-svrr2\" (UID: \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\") " pod="openstack/nova-cell1-conductor-db-sync-svrr2" Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.264809 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-config-data\") pod \"nova-cell1-conductor-db-sync-svrr2\" (UID: \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\") " pod="openstack/nova-cell1-conductor-db-sync-svrr2" Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.330241 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.413860 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-svrr2" Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.592598 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sskvp" event={"ID":"ee6fbfd1-847b-4c26-ad01-dcc5e138f530","Type":"ContainerStarted","Data":"ccdaed392499debee92431912777d9f7c2ee643097dbbb465631d97de2983f59"} Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.594890 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585","Type":"ContainerStarted","Data":"aa7f6e1623d5668a57c5f55acdc51e1706c053e4d66d926268f5714990146bab"} Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.597643 4860 generic.go:334] "Generic (PLEG): container finished" podID="0fc82d3c-afa8-4c2a-9e49-531b56497332" containerID="4ad272f4e561fdf43ca32ae9550a8d6c56af7658ba6b7956199a896f4a27fb59" exitCode=0 Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.597689 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" event={"ID":"0fc82d3c-afa8-4c2a-9e49-531b56497332","Type":"ContainerDied","Data":"4ad272f4e561fdf43ca32ae9550a8d6c56af7658ba6b7956199a896f4a27fb59"} Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.597706 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" event={"ID":"0fc82d3c-afa8-4c2a-9e49-531b56497332","Type":"ContainerStarted","Data":"c23716352a7abd96113f39bb7a7d705cf2ce1121c6205056fd385856d24b9695"} Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.615308 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e9ff86ff-9bf0-4ec8-b801-40fc71e53742","Type":"ContainerStarted","Data":"74f7cf205f0b972ef75bfd31326c11468a0de94409b88c6ad4801e0e4e1094f9"} Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.654475 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1aad2102-2a68-40bf-9509-8ca72c8cb48a","Type":"ContainerStarted","Data":"d73eb10d78f8cd368c14ad29a2c32a91b3cc056820e0fc4064c0cd3b3873a8b1"} Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.664423 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1dfc2e67-8f9a-4dd8-bae4-026923959816","Type":"ContainerStarted","Data":"78d028b28528f5a98982f0f09bf12429b33d6abc1535d4671e98391d57cfc8b0"} Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.685518 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9rxm7" event={"ID":"d17051e3-47fc-4f95-8442-3ff6327fadf7","Type":"ContainerStarted","Data":"805c9a11570b79e1fefdae7d7c88096c58886655c8ef02bbaef2390a66ce2b30"} Oct 14 15:12:43 crc kubenswrapper[4860]: I1014 15:12:43.806780 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-9rxm7" podStartSLOduration=2.806761227 podStartE2EDuration="2.806761227s" podCreationTimestamp="2025-10-14 15:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:12:43.759290189 +0000 UTC m=+1425.346073638" watchObservedRunningTime="2025-10-14 15:12:43.806761227 +0000 UTC m=+1425.393544676" Oct 14 15:12:44 crc kubenswrapper[4860]: I1014 15:12:44.487879 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-svrr2"] 
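The long runs of reconciler_common.go entries in this section follow one fixed cadence per volume: VerifyControllerAttachedVolume (reconciler_common.go:245), then MountVolume started (reconciler_common.go:218), then MountVolume.SetUp succeeded (operation_generator.go:637), repeated for scripts, config-data, combined-ca-bundle, and the projected kube-api-access token of each new pod. That cadence is the kubelet volume manager's reconciler converging actual state toward desired state. The sketch below shows only the shape of that loop; the types and helper names are invented, and the real code in pkg/kubelet/volumemanager is considerably more involved.

package main

import "fmt"

// volumeToMount mirrors the fields visible in the log: the UniqueName and the
// pod that needs the volume.
type volumeToMount struct {
	uniqueName string
	podName    string
}

// reconcile drives actual state (mounted) toward desired state (desired),
// echoing the three log lines each volume produces above.
func reconcile(desired []volumeToMount, mounted map[string]bool) {
	for _, v := range desired {
		if mounted[v.uniqueName] {
			continue // already converged; nothing to log
		}
		fmt.Printf("operationExecutor.VerifyControllerAttachedVolume started for volume %q pod %q\n", v.uniqueName, v.podName)
		fmt.Printf("operationExecutor.MountVolume started for volume %q pod %q\n", v.uniqueName, v.podName)
		mounted[v.uniqueName] = true // SetUp prepared the files; record the new actual state
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.uniqueName, v.podName)
	}
}

func main() {
	// Two of the volumes from the nova-cell1-conductor-db-sync-svrr2 entries above.
	desired := []volumeToMount{
		{"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-scripts", "nova-cell1-conductor-db-sync-svrr2"},
		{"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-config-data", "nova-cell1-conductor-db-sync-svrr2"},
	}
	reconcile(desired, map[string]bool{}) // fresh pod: nothing mounted yet
}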
Oct 14 15:12:44 crc kubenswrapper[4860]: I1014 15:12:44.700179 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-svrr2" event={"ID":"4c0c8cbd-2256-4261-9bf5-a62952d239b4","Type":"ContainerStarted","Data":"e8d437b53eb7969ef43a4dbff9692321cd093baa22e145bf3923a9892e0b6797"}
Oct 14 15:12:44 crc kubenswrapper[4860]: I1014 15:12:44.703458 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" event={"ID":"0fc82d3c-afa8-4c2a-9e49-531b56497332","Type":"ContainerStarted","Data":"20c33a589d90cf004180ea32861dc77fec10ea1b083bf4b0ab8dc6fc7440e913"}
Oct 14 15:12:44 crc kubenswrapper[4860]: I1014 15:12:44.724998 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" podStartSLOduration=3.724976437 podStartE2EDuration="3.724976437s" podCreationTimestamp="2025-10-14 15:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:12:44.721365789 +0000 UTC m=+1426.308149258" watchObservedRunningTime="2025-10-14 15:12:44.724976437 +0000 UTC m=+1426.311759886"
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.708620 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-shll4"]
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.750428 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shll4"
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.786164 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-shll4"]
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.805422 4860 generic.go:334] "Generic (PLEG): container finished" podID="ee6fbfd1-847b-4c26-ad01-dcc5e138f530" containerID="ccdaed392499debee92431912777d9f7c2ee643097dbbb465631d97de2983f59" exitCode=0
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.805515 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sskvp" event={"ID":"ee6fbfd1-847b-4c26-ad01-dcc5e138f530","Type":"ContainerDied","Data":"ccdaed392499debee92431912777d9f7c2ee643097dbbb465631d97de2983f59"}
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.809840 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-svrr2" event={"ID":"4c0c8cbd-2256-4261-9bf5-a62952d239b4","Type":"ContainerStarted","Data":"8f4946dea223cabde57677246514f3688d78e686b5c2061b1d6b8dc08b54640e"}
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.809885 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-kcm5k"
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.816557 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59dce401-ce86-4798-a1ef-6a520c406f54-catalog-content\") pod \"redhat-operators-shll4\" (UID: \"59dce401-ce86-4798-a1ef-6a520c406f54\") " pod="openshift-marketplace/redhat-operators-shll4"
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.816633 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glgt2\" (UniqueName: \"kubernetes.io/projected/59dce401-ce86-4798-a1ef-6a520c406f54-kube-api-access-glgt2\") pod \"redhat-operators-shll4\" (UID: \"59dce401-ce86-4798-a1ef-6a520c406f54\") " pod="openshift-marketplace/redhat-operators-shll4"
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.816672 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59dce401-ce86-4798-a1ef-6a520c406f54-utilities\") pod \"redhat-operators-shll4\" (UID: \"59dce401-ce86-4798-a1ef-6a520c406f54\") " pod="openshift-marketplace/redhat-operators-shll4"
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.851104 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-svrr2" podStartSLOduration=3.851065032 podStartE2EDuration="3.851065032s" podCreationTimestamp="2025-10-14 15:12:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:12:45.842426803 +0000 UTC m=+1427.429210252" watchObservedRunningTime="2025-10-14 15:12:45.851065032 +0000 UTC m=+1427.437848481"
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.918500 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59dce401-ce86-4798-a1ef-6a520c406f54-catalog-content\") pod \"redhat-operators-shll4\" (UID: \"59dce401-ce86-4798-a1ef-6a520c406f54\") " pod="openshift-marketplace/redhat-operators-shll4"
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.918610 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glgt2\" (UniqueName: \"kubernetes.io/projected/59dce401-ce86-4798-a1ef-6a520c406f54-kube-api-access-glgt2\") pod \"redhat-operators-shll4\" (UID: \"59dce401-ce86-4798-a1ef-6a520c406f54\") " pod="openshift-marketplace/redhat-operators-shll4"
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.918749 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59dce401-ce86-4798-a1ef-6a520c406f54-utilities\") pod \"redhat-operators-shll4\" (UID: \"59dce401-ce86-4798-a1ef-6a520c406f54\") " pod="openshift-marketplace/redhat-operators-shll4"
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.919843 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59dce401-ce86-4798-a1ef-6a520c406f54-catalog-content\") pod \"redhat-operators-shll4\" (UID: \"59dce401-ce86-4798-a1ef-6a520c406f54\") " pod="openshift-marketplace/redhat-operators-shll4"
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.923346 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59dce401-ce86-4798-a1ef-6a520c406f54-utilities\") pod \"redhat-operators-shll4\" (UID: \"59dce401-ce86-4798-a1ef-6a520c406f54\") " pod="openshift-marketplace/redhat-operators-shll4"
Oct 14 15:12:45 crc kubenswrapper[4860]: I1014 15:12:45.954962 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glgt2\" (UniqueName: \"kubernetes.io/projected/59dce401-ce86-4798-a1ef-6a520c406f54-kube-api-access-glgt2\") pod \"redhat-operators-shll4\" (UID: \"59dce401-ce86-4798-a1ef-6a520c406f54\") " pod="openshift-marketplace/redhat-operators-shll4"
Oct 14 15:12:46 crc kubenswrapper[4860]: I1014 15:12:46.107102 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shll4"
Oct 14 15:12:46 crc kubenswrapper[4860]: I1014 15:12:46.842884 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 14 15:12:46 crc kubenswrapper[4860]: I1014 15:12:46.864970 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.342050 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-shll4"]
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.860304 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sskvp" event={"ID":"ee6fbfd1-847b-4c26-ad01-dcc5e138f530","Type":"ContainerStarted","Data":"4a327b8e79844b8373c75c4ac0e7db05a687f9fecb137cfef5b4de64be34e053"}
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.880852 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585","Type":"ContainerStarted","Data":"1005593f4f4a0c8650e91b89fbefd21aba62a3b1df7aa7f36deb9f33ed947509"}
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.880890 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585","Type":"ContainerStarted","Data":"d05c224a109f3ff601802367bb39e4c39f953f77d4ad7e2f0ed5ad5e2d14371f"}
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.886600 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e9ff86ff-9bf0-4ec8-b801-40fc71e53742","Type":"ContainerStarted","Data":"e4af3749f69dc7206fce76cc17289230ab9bd5713394fc5742b9a9e1d718a2e9"}
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.886651 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e9ff86ff-9bf0-4ec8-b801-40fc71e53742","Type":"ContainerStarted","Data":"f43ede1457a38f9a73e73e0feb5dd3eeb9cf6d3d98552faffd1e3f9786acb456"}
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.886801 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e9ff86ff-9bf0-4ec8-b801-40fc71e53742" containerName="nova-metadata-log" containerID="cri-o://f43ede1457a38f9a73e73e0feb5dd3eeb9cf6d3d98552faffd1e3f9786acb456" gracePeriod=30
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.886919 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e9ff86ff-9bf0-4ec8-b801-40fc71e53742" containerName="nova-metadata-metadata" containerID="cri-o://e4af3749f69dc7206fce76cc17289230ab9bd5713394fc5742b9a9e1d718a2e9" gracePeriod=30
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.887841 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sskvp" podStartSLOduration=3.429477269 podStartE2EDuration="9.887825452s" podCreationTimestamp="2025-10-14 15:12:39 +0000 UTC" firstStartedPulling="2025-10-14 15:12:41.389792322 +0000 UTC m=+1422.976575781" lastFinishedPulling="2025-10-14 15:12:47.848140515 +0000 UTC m=+1429.434923964" observedRunningTime="2025-10-14 15:12:48.881282293 +0000 UTC m=+1430.468065752" watchObservedRunningTime="2025-10-14 15:12:48.887825452 +0000 UTC m=+1430.474608901"
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.900645 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1aad2102-2a68-40bf-9509-8ca72c8cb48a","Type":"ContainerStarted","Data":"1cfc687799c01f4b994d8f30fbefeddbd778f82c956d03661db83b21df08b223"}
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.900779 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1aad2102-2a68-40bf-9509-8ca72c8cb48a" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://1cfc687799c01f4b994d8f30fbefeddbd778f82c956d03661db83b21df08b223" gracePeriod=30
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.905152 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1dfc2e67-8f9a-4dd8-bae4-026923959816","Type":"ContainerStarted","Data":"36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195"}
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.915886 4860 generic.go:334] "Generic (PLEG): container finished" podID="59dce401-ce86-4798-a1ef-6a520c406f54" containerID="f1fecd3441feb1915795ce464fe9cb169b9fbd5faa6578d4788a8061375bdb11" exitCode=0
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.915935 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shll4" event={"ID":"59dce401-ce86-4798-a1ef-6a520c406f54","Type":"ContainerDied","Data":"f1fecd3441feb1915795ce464fe9cb169b9fbd5faa6578d4788a8061375bdb11"}
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.915962 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shll4" event={"ID":"59dce401-ce86-4798-a1ef-6a520c406f54","Type":"ContainerStarted","Data":"f09be32998cd1ebd4762c34e5738ffee79df3f890d7df52a3916dced89a0f9ae"}
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.917586 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.746707635 podStartE2EDuration="7.91756468s" podCreationTimestamp="2025-10-14 15:12:41 +0000 UTC" firstStartedPulling="2025-10-14 15:12:42.63716519 +0000 UTC m=+1424.223948639" lastFinishedPulling="2025-10-14 15:12:47.808022235 +0000 UTC m=+1429.394805684" observedRunningTime="2025-10-14 15:12:48.916595618 +0000 UTC m=+1430.503379067" watchObservedRunningTime="2025-10-14 15:12:48.91756468 +0000 UTC m=+1430.504348129"
Oct 14 15:12:48 crc kubenswrapper[4860]: I1014 15:12:48.954870 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.5750172620000003 podStartE2EDuration="7.954848372s" podCreationTimestamp="2025-10-14 15:12:41 +0000 UTC" firstStartedPulling="2025-10-14 15:12:43.35313049 +0000 UTC m=+1424.939913939" lastFinishedPulling="2025-10-14 15:12:47.73296159 +0000 UTC m=+1429.319745049" observedRunningTime="2025-10-14 15:12:48.939687555 +0000 UTC m=+1430.526471004" watchObservedRunningTime="2025-10-14 15:12:48.954848372 +0000 UTC m=+1430.541631821"
Oct 14 15:12:49 crc kubenswrapper[4860]: I1014 15:12:49.004764 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.3373263939999998 podStartE2EDuration="8.004740818s" podCreationTimestamp="2025-10-14 15:12:41 +0000 UTC" firstStartedPulling="2025-10-14 15:12:43.178852346 +0000 UTC m=+1424.765635785" lastFinishedPulling="2025-10-14 15:12:47.84626676 +0000 UTC m=+1429.433050209" observedRunningTime="2025-10-14 15:12:48.990798641 +0000 UTC m=+1430.577582100" watchObservedRunningTime="2025-10-14 15:12:49.004740818 +0000 UTC m=+1430.591524267"
Oct 14 15:12:49 crc kubenswrapper[4860]: I1014 15:12:49.076960 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9927286029999998 podStartE2EDuration="8.076940054s" podCreationTimestamp="2025-10-14 15:12:41 +0000 UTC" firstStartedPulling="2025-10-14 15:12:42.879113269 +0000 UTC m=+1424.465896718" lastFinishedPulling="2025-10-14 15:12:47.96332472 +0000 UTC m=+1429.550108169" observedRunningTime="2025-10-14 15:12:49.063626572 +0000 UTC m=+1430.650410021" watchObservedRunningTime="2025-10-14 15:12:49.076940054 +0000 UTC m=+1430.663723503"
Oct 14 15:12:49 crc kubenswrapper[4860]: I1014 15:12:49.927888 4860 generic.go:334] "Generic (PLEG): container finished" podID="e9ff86ff-9bf0-4ec8-b801-40fc71e53742" containerID="f43ede1457a38f9a73e73e0feb5dd3eeb9cf6d3d98552faffd1e3f9786acb456" exitCode=143
Oct 14 15:12:49 crc kubenswrapper[4860]: I1014 15:12:49.928079 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e9ff86ff-9bf0-4ec8-b801-40fc71e53742","Type":"ContainerDied","Data":"f43ede1457a38f9a73e73e0feb5dd3eeb9cf6d3d98552faffd1e3f9786acb456"}
Oct 14 15:12:50 crc kubenswrapper[4860]: I1014 15:12:50.171463 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sskvp"
Oct 14 15:12:50 crc kubenswrapper[4860]: I1014 15:12:50.172498 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sskvp"
Oct 14 15:12:50 crc kubenswrapper[4860]: I1014 15:12:50.939670 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shll4" event={"ID":"59dce401-ce86-4798-a1ef-6a520c406f54","Type":"ContainerStarted","Data":"1fe13852a6f0169ba1dd1eaba4225d6c1ab662595be0b450c648881a9f67abe7"}
Oct 14 15:12:51 crc kubenswrapper[4860]: I1014 15:12:51.232963 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-sskvp" podUID="ee6fbfd1-847b-4c26-ad01-dcc5e138f530" containerName="registry-server" probeResult="failure" output=<
Oct 14 15:12:51 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s
Oct 14 15:12:51 crc kubenswrapper[4860]: >
Oct 14 15:12:51 crc kubenswrapper[4860]: I1014 15:12:51.985092 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Oct 14 15:12:51 crc kubenswrapper[4860]: I1014 15:12:51.985151 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Oct 14 15:12:52 crc kubenswrapper[4860]: I1014 15:12:52.099726 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Oct 14 15:12:52 crc kubenswrapper[4860]: I1014 15:12:52.099789 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Oct 14 15:12:52 crc kubenswrapper[4860]: I1014 15:12:52.117059 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Oct 14 15:12:52 crc kubenswrapper[4860]: I1014 15:12:52.156191 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-kcm5k"
Oct 14 15:12:52 crc kubenswrapper[4860]: I1014 15:12:52.241195 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vcwrc"]
Oct 14 15:12:52 crc kubenswrapper[4860]: I1014 15:12:52.247455 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" podUID="b0a64287-efcb-40a1-a986-7554e896bf83" containerName="dnsmasq-dns" containerID="cri-o://723a35917212ce2f3c98a48b4513ea817cf3a3243e7e1b4038a688090840d044" gracePeriod=10
Oct 14 15:12:52 crc kubenswrapper[4860]: I1014 15:12:52.335257 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Oct 14 15:12:52 crc kubenswrapper[4860]: I1014 15:12:52.335319 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Oct 14 15:12:52 crc kubenswrapper[4860]: I1014 15:12:52.384777 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Oct 14 15:12:52 crc kubenswrapper[4860]: I1014 15:12:52.964933 4860 generic.go:334] "Generic (PLEG): container finished" podID="b0a64287-efcb-40a1-a986-7554e896bf83" containerID="723a35917212ce2f3c98a48b4513ea817cf3a3243e7e1b4038a688090840d044" exitCode=0
Oct 14 15:12:52 crc kubenswrapper[4860]: I1014 15:12:52.965044 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" event={"ID":"b0a64287-efcb-40a1-a986-7554e896bf83","Type":"ContainerDied","Data":"723a35917212ce2f3c98a48b4513ea817cf3a3243e7e1b4038a688090840d044"}
Oct 14 15:12:52 crc kubenswrapper[4860]: I1014 15:12:52.998962 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.068239 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.068611 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.367367 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc"
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.490844 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-ovsdbserver-sb\") pod \"b0a64287-efcb-40a1-a986-7554e896bf83\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") "
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.491000 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-config\") pod \"b0a64287-efcb-40a1-a986-7554e896bf83\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") "
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.491046 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwkhs\" (UniqueName: \"kubernetes.io/projected/b0a64287-efcb-40a1-a986-7554e896bf83-kube-api-access-mwkhs\") pod \"b0a64287-efcb-40a1-a986-7554e896bf83\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") "
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.491085 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-ovsdbserver-nb\") pod \"b0a64287-efcb-40a1-a986-7554e896bf83\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") "
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.491155 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-dns-svc\") pod \"b0a64287-efcb-40a1-a986-7554e896bf83\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") "
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.491240 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-dns-swift-storage-0\") pod \"b0a64287-efcb-40a1-a986-7554e896bf83\" (UID: \"b0a64287-efcb-40a1-a986-7554e896bf83\") "
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.515418 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a64287-efcb-40a1-a986-7554e896bf83-kube-api-access-mwkhs" (OuterVolumeSpecName: "kube-api-access-mwkhs") pod "b0a64287-efcb-40a1-a986-7554e896bf83" (UID: "b0a64287-efcb-40a1-a986-7554e896bf83"). InnerVolumeSpecName "kube-api-access-mwkhs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.621747 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b0a64287-efcb-40a1-a986-7554e896bf83" (UID: "b0a64287-efcb-40a1-a986-7554e896bf83"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.624715 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.624775 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwkhs\" (UniqueName: \"kubernetes.io/projected/b0a64287-efcb-40a1-a986-7554e896bf83-kube-api-access-mwkhs\") on node \"crc\" DevicePath \"\""
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.632306 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b0a64287-efcb-40a1-a986-7554e896bf83" (UID: "b0a64287-efcb-40a1-a986-7554e896bf83"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.681579 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-config" (OuterVolumeSpecName: "config") pod "b0a64287-efcb-40a1-a986-7554e896bf83" (UID: "b0a64287-efcb-40a1-a986-7554e896bf83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.693920 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b0a64287-efcb-40a1-a986-7554e896bf83" (UID: "b0a64287-efcb-40a1-a986-7554e896bf83"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.695316 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b0a64287-efcb-40a1-a986-7554e896bf83" (UID: "b0a64287-efcb-40a1-a986-7554e896bf83"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.727254 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-config\") on node \"crc\" DevicePath \"\""
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.727294 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.727306 4860 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.727315 4860 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0a64287-efcb-40a1-a986-7554e896bf83-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.982229 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc"
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.985299 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vcwrc" event={"ID":"b0a64287-efcb-40a1-a986-7554e896bf83","Type":"ContainerDied","Data":"4970cb2f29a4f6f2bbb0ab0f9df4f62deda3b385ef30bd8b49d4b8552fa7e8dd"} Oct 14 15:12:53 crc kubenswrapper[4860]: I1014 15:12:53.985401 4860 scope.go:117] "RemoveContainer" containerID="723a35917212ce2f3c98a48b4513ea817cf3a3243e7e1b4038a688090840d044" Oct 14 15:12:54 crc kubenswrapper[4860]: I1014 15:12:54.020103 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vcwrc"] Oct 14 15:12:54 crc kubenswrapper[4860]: I1014 15:12:54.038183 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vcwrc"] Oct 14 15:12:54 crc kubenswrapper[4860]: I1014 15:12:54.044806 4860 scope.go:117] "RemoveContainer" containerID="a71397b13a9e603000cf02db5c94e223e11537b20dbe95d0b02220bdcec6e23f" Oct 14 15:12:55 crc kubenswrapper[4860]: I1014 15:12:55.072240 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a64287-efcb-40a1-a986-7554e896bf83" path="/var/lib/kubelet/pods/b0a64287-efcb-40a1-a986-7554e896bf83/volumes" Oct 14 15:12:56 crc kubenswrapper[4860]: I1014 15:12:56.005324 4860 generic.go:334] "Generic (PLEG): container finished" podID="59dce401-ce86-4798-a1ef-6a520c406f54" containerID="1fe13852a6f0169ba1dd1eaba4225d6c1ab662595be0b450c648881a9f67abe7" exitCode=0 Oct 14 15:12:56 crc kubenswrapper[4860]: I1014 15:12:56.005411 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shll4" event={"ID":"59dce401-ce86-4798-a1ef-6a520c406f54","Type":"ContainerDied","Data":"1fe13852a6f0169ba1dd1eaba4225d6c1ab662595be0b450c648881a9f67abe7"} Oct 14 15:12:57 crc kubenswrapper[4860]: I1014 15:12:57.017485 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shll4" event={"ID":"59dce401-ce86-4798-a1ef-6a520c406f54","Type":"ContainerStarted","Data":"2900df71eef6894d2199613ee76e42184b5493d4c13e167cafc3720dda7aee46"} Oct 14 15:12:57 crc kubenswrapper[4860]: I1014 15:12:57.040651 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-shll4" podStartSLOduration=4.220948165 podStartE2EDuration="12.040633662s" podCreationTimestamp="2025-10-14 15:12:45 +0000 UTC" firstStartedPulling="2025-10-14 15:12:48.919660161 +0000 UTC m=+1430.506443610" lastFinishedPulling="2025-10-14 15:12:56.739345658 +0000 UTC m=+1438.326129107" observedRunningTime="2025-10-14 15:12:57.039275359 +0000 UTC m=+1438.626058808" watchObservedRunningTime="2025-10-14 15:12:57.040633662 +0000 UTC m=+1438.627417111" Oct 14 15:12:57 crc kubenswrapper[4860]: I1014 15:12:57.825690 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 14 15:12:58 crc kubenswrapper[4860]: I1014 15:12:58.028342 4860 generic.go:334] "Generic (PLEG): container finished" podID="d17051e3-47fc-4f95-8442-3ff6327fadf7" containerID="805c9a11570b79e1fefdae7d7c88096c58886655c8ef02bbaef2390a66ce2b30" exitCode=0 Oct 14 15:12:58 crc kubenswrapper[4860]: I1014 15:12:58.028404 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9rxm7" 
event={"ID":"d17051e3-47fc-4f95-8442-3ff6327fadf7","Type":"ContainerDied","Data":"805c9a11570b79e1fefdae7d7c88096c58886655c8ef02bbaef2390a66ce2b30"} Oct 14 15:12:59 crc kubenswrapper[4860]: I1014 15:12:59.246456 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:12:59 crc kubenswrapper[4860]: I1014 15:12:59.246906 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:12:59 crc kubenswrapper[4860]: I1014 15:12:59.512995 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9rxm7" Oct 14 15:12:59 crc kubenswrapper[4860]: I1014 15:12:59.651463 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-scripts\") pod \"d17051e3-47fc-4f95-8442-3ff6327fadf7\" (UID: \"d17051e3-47fc-4f95-8442-3ff6327fadf7\") " Oct 14 15:12:59 crc kubenswrapper[4860]: I1014 15:12:59.651800 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgzkv\" (UniqueName: \"kubernetes.io/projected/d17051e3-47fc-4f95-8442-3ff6327fadf7-kube-api-access-fgzkv\") pod \"d17051e3-47fc-4f95-8442-3ff6327fadf7\" (UID: \"d17051e3-47fc-4f95-8442-3ff6327fadf7\") " Oct 14 15:12:59 crc kubenswrapper[4860]: I1014 15:12:59.651933 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-combined-ca-bundle\") pod \"d17051e3-47fc-4f95-8442-3ff6327fadf7\" (UID: \"d17051e3-47fc-4f95-8442-3ff6327fadf7\") " Oct 14 15:12:59 crc kubenswrapper[4860]: I1014 15:12:59.651964 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-config-data\") pod \"d17051e3-47fc-4f95-8442-3ff6327fadf7\" (UID: \"d17051e3-47fc-4f95-8442-3ff6327fadf7\") " Oct 14 15:12:59 crc kubenswrapper[4860]: I1014 15:12:59.663223 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d17051e3-47fc-4f95-8442-3ff6327fadf7-kube-api-access-fgzkv" (OuterVolumeSpecName: "kube-api-access-fgzkv") pod "d17051e3-47fc-4f95-8442-3ff6327fadf7" (UID: "d17051e3-47fc-4f95-8442-3ff6327fadf7"). InnerVolumeSpecName "kube-api-access-fgzkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:12:59 crc kubenswrapper[4860]: I1014 15:12:59.663398 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-scripts" (OuterVolumeSpecName: "scripts") pod "d17051e3-47fc-4f95-8442-3ff6327fadf7" (UID: "d17051e3-47fc-4f95-8442-3ff6327fadf7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:12:59 crc kubenswrapper[4860]: I1014 15:12:59.696188 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-config-data" (OuterVolumeSpecName: "config-data") pod "d17051e3-47fc-4f95-8442-3ff6327fadf7" (UID: "d17051e3-47fc-4f95-8442-3ff6327fadf7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:12:59 crc kubenswrapper[4860]: I1014 15:12:59.696281 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d17051e3-47fc-4f95-8442-3ff6327fadf7" (UID: "d17051e3-47fc-4f95-8442-3ff6327fadf7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:12:59 crc kubenswrapper[4860]: I1014 15:12:59.754204 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:59 crc kubenswrapper[4860]: I1014 15:12:59.754234 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgzkv\" (UniqueName: \"kubernetes.io/projected/d17051e3-47fc-4f95-8442-3ff6327fadf7-kube-api-access-fgzkv\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:59 crc kubenswrapper[4860]: I1014 15:12:59.754247 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:12:59 crc kubenswrapper[4860]: I1014 15:12:59.754256 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17051e3-47fc-4f95-8442-3ff6327fadf7-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:00 crc kubenswrapper[4860]: I1014 15:13:00.049824 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9rxm7" event={"ID":"d17051e3-47fc-4f95-8442-3ff6327fadf7","Type":"ContainerDied","Data":"717e4c7426737be5cbf1b5a7cdfdcbb8b388ff94602a567e95a0b8b126b13ae3"} Oct 14 15:13:00 crc kubenswrapper[4860]: I1014 15:13:00.049860 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="717e4c7426737be5cbf1b5a7cdfdcbb8b388ff94602a567e95a0b8b126b13ae3" Oct 14 15:13:00 crc kubenswrapper[4860]: I1014 15:13:00.049865 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9rxm7" Oct 14 15:13:00 crc kubenswrapper[4860]: I1014 15:13:00.235833 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 15:13:00 crc kubenswrapper[4860]: I1014 15:13:00.236177 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" containerName="nova-api-log" containerID="cri-o://d05c224a109f3ff601802367bb39e4c39f953f77d4ad7e2f0ed5ad5e2d14371f" gracePeriod=30 Oct 14 15:13:00 crc kubenswrapper[4860]: I1014 15:13:00.236304 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" containerName="nova-api-api" containerID="cri-o://1005593f4f4a0c8650e91b89fbefd21aba62a3b1df7aa7f36deb9f33ed947509" gracePeriod=30 Oct 14 15:13:00 crc kubenswrapper[4860]: I1014 15:13:00.252092 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 15:13:00 crc kubenswrapper[4860]: I1014 15:13:00.252322 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1dfc2e67-8f9a-4dd8-bae4-026923959816" containerName="nova-scheduler-scheduler" containerID="cri-o://36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195" gracePeriod=30 Oct 14 15:13:01 crc kubenswrapper[4860]: I1014 15:13:01.060459 4860 generic.go:334] "Generic (PLEG): container finished" podID="3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" containerID="d05c224a109f3ff601802367bb39e4c39f953f77d4ad7e2f0ed5ad5e2d14371f" exitCode=143 Oct 14 15:13:01 crc kubenswrapper[4860]: I1014 15:13:01.060541 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585","Type":"ContainerDied","Data":"d05c224a109f3ff601802367bb39e4c39f953f77d4ad7e2f0ed5ad5e2d14371f"} Oct 14 15:13:01 crc kubenswrapper[4860]: I1014 15:13:01.252729 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-sskvp" podUID="ee6fbfd1-847b-4c26-ad01-dcc5e138f530" containerName="registry-server" probeResult="failure" output=< Oct 14 15:13:01 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:13:01 crc kubenswrapper[4860]: > Oct 14 15:13:02 crc kubenswrapper[4860]: I1014 15:13:02.070757 4860 generic.go:334] "Generic (PLEG): container finished" podID="4c0c8cbd-2256-4261-9bf5-a62952d239b4" containerID="8f4946dea223cabde57677246514f3688d78e686b5c2061b1d6b8dc08b54640e" exitCode=0 Oct 14 15:13:02 crc kubenswrapper[4860]: I1014 15:13:02.070830 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-svrr2" event={"ID":"4c0c8cbd-2256-4261-9bf5-a62952d239b4","Type":"ContainerDied","Data":"8f4946dea223cabde57677246514f3688d78e686b5c2061b1d6b8dc08b54640e"} Oct 14 15:13:02 crc kubenswrapper[4860]: E1014 15:13:02.336719 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195 is running failed: container process not found" containerID="36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 15:13:02 crc kubenswrapper[4860]: E1014 15:13:02.337260 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195 is running failed: container process not found" containerID="36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 15:13:02 crc kubenswrapper[4860]: E1014 15:13:02.337583 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195 is running failed: container process not found" containerID="36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 15:13:02 crc kubenswrapper[4860]: E1014 15:13:02.337648 4860 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1dfc2e67-8f9a-4dd8-bae4-026923959816" containerName="nova-scheduler-scheduler" Oct 14 15:13:02 crc kubenswrapper[4860]: I1014 15:13:02.760571 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 15:13:02 crc kubenswrapper[4860]: I1014 15:13:02.804827 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfc2e67-8f9a-4dd8-bae4-026923959816-config-data\") pod \"1dfc2e67-8f9a-4dd8-bae4-026923959816\" (UID: \"1dfc2e67-8f9a-4dd8-bae4-026923959816\") " Oct 14 15:13:02 crc kubenswrapper[4860]: I1014 15:13:02.804887 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx4d\" (UniqueName: \"kubernetes.io/projected/1dfc2e67-8f9a-4dd8-bae4-026923959816-kube-api-access-pgx4d\") pod \"1dfc2e67-8f9a-4dd8-bae4-026923959816\" (UID: \"1dfc2e67-8f9a-4dd8-bae4-026923959816\") " Oct 14 15:13:02 crc kubenswrapper[4860]: I1014 15:13:02.804984 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfc2e67-8f9a-4dd8-bae4-026923959816-combined-ca-bundle\") pod \"1dfc2e67-8f9a-4dd8-bae4-026923959816\" (UID: \"1dfc2e67-8f9a-4dd8-bae4-026923959816\") " Oct 14 15:13:02 crc kubenswrapper[4860]: I1014 15:13:02.821280 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dfc2e67-8f9a-4dd8-bae4-026923959816-kube-api-access-pgx4d" (OuterVolumeSpecName: "kube-api-access-pgx4d") pod "1dfc2e67-8f9a-4dd8-bae4-026923959816" (UID: "1dfc2e67-8f9a-4dd8-bae4-026923959816"). InnerVolumeSpecName "kube-api-access-pgx4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:13:02 crc kubenswrapper[4860]: I1014 15:13:02.846722 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dfc2e67-8f9a-4dd8-bae4-026923959816-config-data" (OuterVolumeSpecName: "config-data") pod "1dfc2e67-8f9a-4dd8-bae4-026923959816" (UID: "1dfc2e67-8f9a-4dd8-bae4-026923959816"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:02 crc kubenswrapper[4860]: I1014 15:13:02.850067 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dfc2e67-8f9a-4dd8-bae4-026923959816-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dfc2e67-8f9a-4dd8-bae4-026923959816" (UID: "1dfc2e67-8f9a-4dd8-bae4-026923959816"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:02 crc kubenswrapper[4860]: I1014 15:13:02.907674 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfc2e67-8f9a-4dd8-bae4-026923959816-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:02 crc kubenswrapper[4860]: I1014 15:13:02.907708 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgx4d\" (UniqueName: \"kubernetes.io/projected/1dfc2e67-8f9a-4dd8-bae4-026923959816-kube-api-access-pgx4d\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:02 crc kubenswrapper[4860]: I1014 15:13:02.907721 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfc2e67-8f9a-4dd8-bae4-026923959816-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.057806 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.058224 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="22d1e6d4-a98e-457e-9e99-8e2f4319031b" containerName="kube-state-metrics" containerID="cri-o://8b165450c585599762306c6be1d877e6149fd2a26a9a1bfabf04baac74422bb9" gracePeriod=30 Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.110439 4860 generic.go:334] "Generic (PLEG): container finished" podID="1dfc2e67-8f9a-4dd8-bae4-026923959816" containerID="36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195" exitCode=0 Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.110507 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.110514 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1dfc2e67-8f9a-4dd8-bae4-026923959816","Type":"ContainerDied","Data":"36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195"} Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.110569 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1dfc2e67-8f9a-4dd8-bae4-026923959816","Type":"ContainerDied","Data":"78d028b28528f5a98982f0f09bf12429b33d6abc1535d4671e98391d57cfc8b0"} Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.110587 4860 scope.go:117] "RemoveContainer" containerID="36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.136655 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.149808 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.157868 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 15:13:03 crc kubenswrapper[4860]: E1014 15:13:03.159321 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17051e3-47fc-4f95-8442-3ff6327fadf7" containerName="nova-manage" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.159339 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17051e3-47fc-4f95-8442-3ff6327fadf7" containerName="nova-manage" Oct 14 15:13:03 crc kubenswrapper[4860]: E1014 15:13:03.159350 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dfc2e67-8f9a-4dd8-bae4-026923959816" containerName="nova-scheduler-scheduler" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.159357 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dfc2e67-8f9a-4dd8-bae4-026923959816" containerName="nova-scheduler-scheduler" Oct 14 15:13:03 crc kubenswrapper[4860]: E1014 15:13:03.159383 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a64287-efcb-40a1-a986-7554e896bf83" containerName="init" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.159389 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a64287-efcb-40a1-a986-7554e896bf83" containerName="init" Oct 14 15:13:03 crc kubenswrapper[4860]: E1014 15:13:03.159417 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a64287-efcb-40a1-a986-7554e896bf83" containerName="dnsmasq-dns" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.159423 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a64287-efcb-40a1-a986-7554e896bf83" containerName="dnsmasq-dns" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.159607 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d17051e3-47fc-4f95-8442-3ff6327fadf7" containerName="nova-manage" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.159639 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a64287-efcb-40a1-a986-7554e896bf83" containerName="dnsmasq-dns" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.159650 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dfc2e67-8f9a-4dd8-bae4-026923959816" containerName="nova-scheduler-scheduler" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.160355 4860 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.163665 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.183044 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.210841 4860 scope.go:117] "RemoveContainer" containerID="36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195" Oct 14 15:13:03 crc kubenswrapper[4860]: E1014 15:13:03.213958 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195\": container with ID starting with 36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195 not found: ID does not exist" containerID="36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.213992 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195"} err="failed to get container status \"36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195\": rpc error: code = NotFound desc = could not find container \"36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195\": container with ID starting with 36a2d1212a22775e5e91a5f7537886dac3513252951a79a108937b767c3ac195 not found: ID does not exist" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.215485 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpbv9\" (UniqueName: \"kubernetes.io/projected/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-kube-api-access-wpbv9\") pod \"nova-scheduler-0\" (UID: \"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1\") " pod="openstack/nova-scheduler-0" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.215681 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1\") " pod="openstack/nova-scheduler-0" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.220627 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-config-data\") pod \"nova-scheduler-0\" (UID: \"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1\") " pod="openstack/nova-scheduler-0" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.323205 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpbv9\" (UniqueName: \"kubernetes.io/projected/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-kube-api-access-wpbv9\") pod \"nova-scheduler-0\" (UID: \"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1\") " pod="openstack/nova-scheduler-0" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.323282 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1\") " 
pod="openstack/nova-scheduler-0" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.323328 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-config-data\") pod \"nova-scheduler-0\" (UID: \"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1\") " pod="openstack/nova-scheduler-0" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.329497 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-config-data\") pod \"nova-scheduler-0\" (UID: \"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1\") " pod="openstack/nova-scheduler-0" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.336880 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1\") " pod="openstack/nova-scheduler-0" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.362621 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpbv9\" (UniqueName: \"kubernetes.io/projected/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-kube-api-access-wpbv9\") pod \"nova-scheduler-0\" (UID: \"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1\") " pod="openstack/nova-scheduler-0" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.534551 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.562149 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-svrr2" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.574736 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.639565 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzfzj\" (UniqueName: \"kubernetes.io/projected/4c0c8cbd-2256-4261-9bf5-a62952d239b4-kube-api-access-wzfzj\") pod \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\" (UID: \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\") " Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.639604 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-config-data\") pod \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\" (UID: \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\") " Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.639685 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn4f4\" (UniqueName: \"kubernetes.io/projected/22d1e6d4-a98e-457e-9e99-8e2f4319031b-kube-api-access-hn4f4\") pod \"22d1e6d4-a98e-457e-9e99-8e2f4319031b\" (UID: \"22d1e6d4-a98e-457e-9e99-8e2f4319031b\") " Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.639766 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-combined-ca-bundle\") pod \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\" (UID: \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\") " Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.639814 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-scripts\") pod \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\" (UID: \"4c0c8cbd-2256-4261-9bf5-a62952d239b4\") " Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.648109 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22d1e6d4-a98e-457e-9e99-8e2f4319031b-kube-api-access-hn4f4" (OuterVolumeSpecName: "kube-api-access-hn4f4") pod "22d1e6d4-a98e-457e-9e99-8e2f4319031b" (UID: "22d1e6d4-a98e-457e-9e99-8e2f4319031b"). InnerVolumeSpecName "kube-api-access-hn4f4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.651865 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-scripts" (OuterVolumeSpecName: "scripts") pod "4c0c8cbd-2256-4261-9bf5-a62952d239b4" (UID: "4c0c8cbd-2256-4261-9bf5-a62952d239b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.653379 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c0c8cbd-2256-4261-9bf5-a62952d239b4-kube-api-access-wzfzj" (OuterVolumeSpecName: "kube-api-access-wzfzj") pod "4c0c8cbd-2256-4261-9bf5-a62952d239b4" (UID: "4c0c8cbd-2256-4261-9bf5-a62952d239b4"). InnerVolumeSpecName "kube-api-access-wzfzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.675220 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-config-data" (OuterVolumeSpecName: "config-data") pod "4c0c8cbd-2256-4261-9bf5-a62952d239b4" (UID: "4c0c8cbd-2256-4261-9bf5-a62952d239b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.695958 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c0c8cbd-2256-4261-9bf5-a62952d239b4" (UID: "4c0c8cbd-2256-4261-9bf5-a62952d239b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.742437 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn4f4\" (UniqueName: \"kubernetes.io/projected/22d1e6d4-a98e-457e-9e99-8e2f4319031b-kube-api-access-hn4f4\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.742755 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.742764 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.742774 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzfzj\" (UniqueName: \"kubernetes.io/projected/4c0c8cbd-2256-4261-9bf5-a62952d239b4-kube-api-access-wzfzj\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:03 crc kubenswrapper[4860]: I1014 15:13:03.742782 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c0c8cbd-2256-4261-9bf5-a62952d239b4-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.077501 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.136852 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1","Type":"ContainerStarted","Data":"6b28eacfe603b5e2cb7e1a0c692203244ac263ec058f3551c51bbaf023fd5506"} Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.139706 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.140545 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-svrr2" event={"ID":"4c0c8cbd-2256-4261-9bf5-a62952d239b4","Type":"ContainerDied","Data":"e8d437b53eb7969ef43a4dbff9692321cd093baa22e145bf3923a9892e0b6797"} Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.140578 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8d437b53eb7969ef43a4dbff9692321cd093baa22e145bf3923a9892e0b6797" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.140630 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-svrr2" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.213236 4860 generic.go:334] "Generic (PLEG): container finished" podID="3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" containerID="1005593f4f4a0c8650e91b89fbefd21aba62a3b1df7aa7f36deb9f33ed947509" exitCode=0 Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.213306 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585","Type":"ContainerDied","Data":"1005593f4f4a0c8650e91b89fbefd21aba62a3b1df7aa7f36deb9f33ed947509"} Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.213337 4860 scope.go:117] "RemoveContainer" containerID="1005593f4f4a0c8650e91b89fbefd21aba62a3b1df7aa7f36deb9f33ed947509" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.213423 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.227535 4860 generic.go:334] "Generic (PLEG): container finished" podID="22d1e6d4-a98e-457e-9e99-8e2f4319031b" containerID="8b165450c585599762306c6be1d877e6149fd2a26a9a1bfabf04baac74422bb9" exitCode=2 Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.227606 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.227613 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"22d1e6d4-a98e-457e-9e99-8e2f4319031b","Type":"ContainerDied","Data":"8b165450c585599762306c6be1d877e6149fd2a26a9a1bfabf04baac74422bb9"} Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.227716 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"22d1e6d4-a98e-457e-9e99-8e2f4319031b","Type":"ContainerDied","Data":"f191838b78886f436c9c36b8e9d5a93bc4b6b27959fcc80b2d3bff665262471f"} Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.231173 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 15:13:04 crc kubenswrapper[4860]: E1014 15:13:04.232045 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" containerName="nova-api-api" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.232068 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" containerName="nova-api-api" Oct 14 15:13:04 crc kubenswrapper[4860]: E1014 15:13:04.232088 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" containerName="nova-api-log" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.232096 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" containerName="nova-api-log" Oct 14 15:13:04 crc kubenswrapper[4860]: E1014 15:13:04.232134 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0c8cbd-2256-4261-9bf5-a62952d239b4" containerName="nova-cell1-conductor-db-sync" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.232142 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0c8cbd-2256-4261-9bf5-a62952d239b4" containerName="nova-cell1-conductor-db-sync" Oct 14 15:13:04 crc kubenswrapper[4860]: E1014 15:13:04.232159 4860 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="22d1e6d4-a98e-457e-9e99-8e2f4319031b" containerName="kube-state-metrics" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.232169 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d1e6d4-a98e-457e-9e99-8e2f4319031b" containerName="kube-state-metrics" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.232361 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" containerName="nova-api-api" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.232380 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c0c8cbd-2256-4261-9bf5-a62952d239b4" containerName="nova-cell1-conductor-db-sync" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.232393 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d1e6d4-a98e-457e-9e99-8e2f4319031b" containerName="kube-state-metrics" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.232404 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" containerName="nova-api-log" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.241280 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.243698 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.264892 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-logs\") pod \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\" (UID: \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\") " Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.264990 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-combined-ca-bundle\") pod \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\" (UID: \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\") " Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.265082 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-config-data\") pod \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\" (UID: \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\") " Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.265219 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28f6w\" (UniqueName: \"kubernetes.io/projected/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-kube-api-access-28f6w\") pod \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\" (UID: \"3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585\") " Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.265568 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37eab7d3-1474-46a2-85f7-9f874511aea2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"37eab7d3-1474-46a2-85f7-9f874511aea2\") " pod="openstack/nova-cell1-conductor-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.265615 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37eab7d3-1474-46a2-85f7-9f874511aea2-combined-ca-bundle\") pod 
\"nova-cell1-conductor-0\" (UID: \"37eab7d3-1474-46a2-85f7-9f874511aea2\") " pod="openstack/nova-cell1-conductor-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.265655 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4psxh\" (UniqueName: \"kubernetes.io/projected/37eab7d3-1474-46a2-85f7-9f874511aea2-kube-api-access-4psxh\") pod \"nova-cell1-conductor-0\" (UID: \"37eab7d3-1474-46a2-85f7-9f874511aea2\") " pod="openstack/nova-cell1-conductor-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.266657 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-logs" (OuterVolumeSpecName: "logs") pod "3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" (UID: "3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.276618 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-kube-api-access-28f6w" (OuterVolumeSpecName: "kube-api-access-28f6w") pod "3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" (UID: "3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585"). InnerVolumeSpecName "kube-api-access-28f6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.284114 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.296340 4860 scope.go:117] "RemoveContainer" containerID="d05c224a109f3ff601802367bb39e4c39f953f77d4ad7e2f0ed5ad5e2d14371f" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.343133 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" (UID: "3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.344393 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.352661 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.361694 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.364950 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.368569 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37eab7d3-1474-46a2-85f7-9f874511aea2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"37eab7d3-1474-46a2-85f7-9f874511aea2\") " pod="openstack/nova-cell1-conductor-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.368627 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37eab7d3-1474-46a2-85f7-9f874511aea2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"37eab7d3-1474-46a2-85f7-9f874511aea2\") " pod="openstack/nova-cell1-conductor-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.368666 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4psxh\" (UniqueName: \"kubernetes.io/projected/37eab7d3-1474-46a2-85f7-9f874511aea2-kube-api-access-4psxh\") pod \"nova-cell1-conductor-0\" (UID: \"37eab7d3-1474-46a2-85f7-9f874511aea2\") " pod="openstack/nova-cell1-conductor-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.368833 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28f6w\" (UniqueName: \"kubernetes.io/projected/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-kube-api-access-28f6w\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.368847 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-logs\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.368857 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.369721 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.371633 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-config-data" (OuterVolumeSpecName: "config-data") pod "3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" (UID: "3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.374511 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37eab7d3-1474-46a2-85f7-9f874511aea2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"37eab7d3-1474-46a2-85f7-9f874511aea2\") " pod="openstack/nova-cell1-conductor-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.383688 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37eab7d3-1474-46a2-85f7-9f874511aea2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"37eab7d3-1474-46a2-85f7-9f874511aea2\") " pod="openstack/nova-cell1-conductor-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.384007 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.384210 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.387638 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4psxh\" (UniqueName: \"kubernetes.io/projected/37eab7d3-1474-46a2-85f7-9f874511aea2-kube-api-access-4psxh\") pod \"nova-cell1-conductor-0\" (UID: \"37eab7d3-1474-46a2-85f7-9f874511aea2\") " pod="openstack/nova-cell1-conductor-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.453408 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.474464 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6922ab3e-5c2c-43d1-8b29-824fd8c4146c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6922ab3e-5c2c-43d1-8b29-824fd8c4146c\") " pod="openstack/kube-state-metrics-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.474535 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd5bl\" (UniqueName: \"kubernetes.io/projected/6922ab3e-5c2c-43d1-8b29-824fd8c4146c-kube-api-access-xd5bl\") pod \"kube-state-metrics-0\" (UID: \"6922ab3e-5c2c-43d1-8b29-824fd8c4146c\") " pod="openstack/kube-state-metrics-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.474592 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6922ab3e-5c2c-43d1-8b29-824fd8c4146c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6922ab3e-5c2c-43d1-8b29-824fd8c4146c\") " pod="openstack/kube-state-metrics-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.474647 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6922ab3e-5c2c-43d1-8b29-824fd8c4146c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6922ab3e-5c2c-43d1-8b29-824fd8c4146c\") " pod="openstack/kube-state-metrics-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.474768 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.479189 4860 scope.go:117] "RemoveContainer" containerID="8b165450c585599762306c6be1d877e6149fd2a26a9a1bfabf04baac74422bb9" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.581390 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6922ab3e-5c2c-43d1-8b29-824fd8c4146c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6922ab3e-5c2c-43d1-8b29-824fd8c4146c\") " pod="openstack/kube-state-metrics-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.581508 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd5bl\" (UniqueName: \"kubernetes.io/projected/6922ab3e-5c2c-43d1-8b29-824fd8c4146c-kube-api-access-xd5bl\") pod \"kube-state-metrics-0\" (UID: \"6922ab3e-5c2c-43d1-8b29-824fd8c4146c\") " pod="openstack/kube-state-metrics-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.582015 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6922ab3e-5c2c-43d1-8b29-824fd8c4146c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6922ab3e-5c2c-43d1-8b29-824fd8c4146c\") " pod="openstack/kube-state-metrics-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.583152 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6922ab3e-5c2c-43d1-8b29-824fd8c4146c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6922ab3e-5c2c-43d1-8b29-824fd8c4146c\") " pod="openstack/kube-state-metrics-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.587924 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6922ab3e-5c2c-43d1-8b29-824fd8c4146c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6922ab3e-5c2c-43d1-8b29-824fd8c4146c\") " pod="openstack/kube-state-metrics-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.595233 4860 scope.go:117] "RemoveContainer" containerID="8b165450c585599762306c6be1d877e6149fd2a26a9a1bfabf04baac74422bb9" Oct 14 15:13:04 crc kubenswrapper[4860]: E1014 15:13:04.595728 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b165450c585599762306c6be1d877e6149fd2a26a9a1bfabf04baac74422bb9\": container with ID starting with 8b165450c585599762306c6be1d877e6149fd2a26a9a1bfabf04baac74422bb9 not found: ID does not exist" containerID="8b165450c585599762306c6be1d877e6149fd2a26a9a1bfabf04baac74422bb9" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.595772 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b165450c585599762306c6be1d877e6149fd2a26a9a1bfabf04baac74422bb9"} err="failed to get container status \"8b165450c585599762306c6be1d877e6149fd2a26a9a1bfabf04baac74422bb9\": rpc error: code = NotFound desc = could not find container \"8b165450c585599762306c6be1d877e6149fd2a26a9a1bfabf04baac74422bb9\": container with ID starting with 8b165450c585599762306c6be1d877e6149fd2a26a9a1bfabf04baac74422bb9 not found: ID does not exist" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.595821 4860 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.596061 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6922ab3e-5c2c-43d1-8b29-824fd8c4146c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6922ab3e-5c2c-43d1-8b29-824fd8c4146c\") " pod="openstack/kube-state-metrics-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.596236 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6922ab3e-5c2c-43d1-8b29-824fd8c4146c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6922ab3e-5c2c-43d1-8b29-824fd8c4146c\") " pod="openstack/kube-state-metrics-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.628266 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd5bl\" (UniqueName: \"kubernetes.io/projected/6922ab3e-5c2c-43d1-8b29-824fd8c4146c-kube-api-access-xd5bl\") pod \"kube-state-metrics-0\" (UID: \"6922ab3e-5c2c-43d1-8b29-824fd8c4146c\") " pod="openstack/kube-state-metrics-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.628889 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.637274 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.639386 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.641317 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.647263 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.688673 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl4kx\" (UniqueName: \"kubernetes.io/projected/5a10352d-79c7-44da-b182-2fe199712ddf-kube-api-access-wl4kx\") pod \"nova-api-0\" (UID: \"5a10352d-79c7-44da-b182-2fe199712ddf\") " pod="openstack/nova-api-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.688763 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a10352d-79c7-44da-b182-2fe199712ddf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5a10352d-79c7-44da-b182-2fe199712ddf\") " pod="openstack/nova-api-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.688786 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a10352d-79c7-44da-b182-2fe199712ddf-config-data\") pod \"nova-api-0\" (UID: \"5a10352d-79c7-44da-b182-2fe199712ddf\") " pod="openstack/nova-api-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.688955 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a10352d-79c7-44da-b182-2fe199712ddf-logs\") pod \"nova-api-0\" (UID: \"5a10352d-79c7-44da-b182-2fe199712ddf\") " pod="openstack/nova-api-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.790268 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a10352d-79c7-44da-b182-2fe199712ddf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5a10352d-79c7-44da-b182-2fe199712ddf\") " pod="openstack/nova-api-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.790315 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a10352d-79c7-44da-b182-2fe199712ddf-config-data\") pod \"nova-api-0\" (UID: \"5a10352d-79c7-44da-b182-2fe199712ddf\") " pod="openstack/nova-api-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.790423 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a10352d-79c7-44da-b182-2fe199712ddf-logs\") pod \"nova-api-0\" (UID: \"5a10352d-79c7-44da-b182-2fe199712ddf\") " pod="openstack/nova-api-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.790572 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl4kx\" (UniqueName: \"kubernetes.io/projected/5a10352d-79c7-44da-b182-2fe199712ddf-kube-api-access-wl4kx\") pod \"nova-api-0\" (UID: \"5a10352d-79c7-44da-b182-2fe199712ddf\") " pod="openstack/nova-api-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.791373 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a10352d-79c7-44da-b182-2fe199712ddf-logs\") pod \"nova-api-0\" (UID: \"5a10352d-79c7-44da-b182-2fe199712ddf\") " pod="openstack/nova-api-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.793962 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a10352d-79c7-44da-b182-2fe199712ddf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5a10352d-79c7-44da-b182-2fe199712ddf\") " pod="openstack/nova-api-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.795634 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a10352d-79c7-44da-b182-2fe199712ddf-config-data\") pod \"nova-api-0\" (UID: \"5a10352d-79c7-44da-b182-2fe199712ddf\") " pod="openstack/nova-api-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.813167 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl4kx\" (UniqueName: \"kubernetes.io/projected/5a10352d-79c7-44da-b182-2fe199712ddf-kube-api-access-wl4kx\") pod \"nova-api-0\" (UID: \"5a10352d-79c7-44da-b182-2fe199712ddf\") " pod="openstack/nova-api-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.883446 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 15:13:04 crc kubenswrapper[4860]: I1014 15:13:04.968917 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 15:13:05 crc kubenswrapper[4860]: I1014 15:13:05.030077 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 15:13:05 crc kubenswrapper[4860]: I1014 15:13:05.087290 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dfc2e67-8f9a-4dd8-bae4-026923959816" path="/var/lib/kubelet/pods/1dfc2e67-8f9a-4dd8-bae4-026923959816/volumes" Oct 14 15:13:05 crc kubenswrapper[4860]: I1014 15:13:05.096363 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22d1e6d4-a98e-457e-9e99-8e2f4319031b" path="/var/lib/kubelet/pods/22d1e6d4-a98e-457e-9e99-8e2f4319031b/volumes" Oct 14 15:13:05 crc kubenswrapper[4860]: I1014 15:13:05.097285 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585" path="/var/lib/kubelet/pods/3eb06fd8-86e9-4c0a-afb0-2cd11ea4c585/volumes" Oct 14 15:13:05 crc kubenswrapper[4860]: I1014 15:13:05.282438 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1","Type":"ContainerStarted","Data":"5c46dbf85ba527c4e84f5b2b9f268b48e6a4634950727f3563a9f8fc67608c20"} Oct 14 15:13:05 crc kubenswrapper[4860]: I1014 15:13:05.317350 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"37eab7d3-1474-46a2-85f7-9f874511aea2","Type":"ContainerStarted","Data":"03a1e3e79ab587cf06f4bbf952ca274883bca7f0a8782d051db94fc2abbcc6ba"} Oct 14 15:13:05 crc kubenswrapper[4860]: I1014 15:13:05.332895 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.332876689 podStartE2EDuration="2.332876689s" podCreationTimestamp="2025-10-14 15:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:13:05.315159282 +0000 UTC m=+1446.901942741" watchObservedRunningTime="2025-10-14 15:13:05.332876689 +0000 UTC m=+1446.919660128" Oct 14 15:13:05 crc kubenswrapper[4860]: I1014 15:13:05.344598 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 15:13:05 crc kubenswrapper[4860]: I1014 15:13:05.514346 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.107916 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-shll4" Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.108285 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-shll4" Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.151796 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.152076 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="428069e3-797e-47db-b53e-565cf5a366bd" containerName="ceilometer-central-agent" containerID="cri-o://65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac" gracePeriod=30 Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.152159 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="428069e3-797e-47db-b53e-565cf5a366bd" containerName="sg-core" 
containerID="cri-o://43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294" gracePeriod=30 Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.152191 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="428069e3-797e-47db-b53e-565cf5a366bd" containerName="proxy-httpd" containerID="cri-o://cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634" gracePeriod=30 Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.152189 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="428069e3-797e-47db-b53e-565cf5a366bd" containerName="ceilometer-notification-agent" containerID="cri-o://96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81" gracePeriod=30 Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.338789 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5a10352d-79c7-44da-b182-2fe199712ddf","Type":"ContainerStarted","Data":"5bd2c94aa6876369db2ed742cf427a87fa3fa1744ca88600b9d113d24843952b"} Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.338831 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5a10352d-79c7-44da-b182-2fe199712ddf","Type":"ContainerStarted","Data":"a94b6b5b5b86da099b5b12ffc576198382a1b74bca44a8a68e2e4fb6f1eaf888"} Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.338857 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5a10352d-79c7-44da-b182-2fe199712ddf","Type":"ContainerStarted","Data":"8a53b59a4d3961d320e26378c86e06f3c1df8595efb610f31e1cc1eb2291a2ff"} Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.350128 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6922ab3e-5c2c-43d1-8b29-824fd8c4146c","Type":"ContainerStarted","Data":"0876b03b42f5f24b52fa6864b772dd4c42dcfca03eb7d08cc815bca05317c2b4"} Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.350178 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6922ab3e-5c2c-43d1-8b29-824fd8c4146c","Type":"ContainerStarted","Data":"10f5e24dfbb605574a7f8ae763fbc0766aabc2e69524c13a8a84056099937d2b"} Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.351118 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.363244 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"37eab7d3-1474-46a2-85f7-9f874511aea2","Type":"ContainerStarted","Data":"4278fb8c978a993b90e2f9fa2e84aeaaa864890e82dbfdcd994e4cdd0eb063cd"} Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.364128 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.368699 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.368682552 podStartE2EDuration="2.368682552s" podCreationTimestamp="2025-10-14 15:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:13:06.363417995 +0000 UTC m=+1447.950201444" watchObservedRunningTime="2025-10-14 15:13:06.368682552 +0000 UTC m=+1447.955466001" Oct 14 15:13:06 crc kubenswrapper[4860]: 
I1014 15:13:06.373092 4860 generic.go:334] "Generic (PLEG): container finished" podID="428069e3-797e-47db-b53e-565cf5a366bd" containerID="43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294" exitCode=2 Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.373940 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428069e3-797e-47db-b53e-565cf5a366bd","Type":"ContainerDied","Data":"43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294"} Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.384000 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.383981112 podStartE2EDuration="2.383981112s" podCreationTimestamp="2025-10-14 15:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:13:06.380764095 +0000 UTC m=+1447.967547544" watchObservedRunningTime="2025-10-14 15:13:06.383981112 +0000 UTC m=+1447.970764561" Oct 14 15:13:06 crc kubenswrapper[4860]: I1014 15:13:06.404852 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.962492422 podStartE2EDuration="2.404812836s" podCreationTimestamp="2025-10-14 15:13:04 +0000 UTC" firstStartedPulling="2025-10-14 15:13:05.353892617 +0000 UTC m=+1446.940676066" lastFinishedPulling="2025-10-14 15:13:05.796213031 +0000 UTC m=+1447.382996480" observedRunningTime="2025-10-14 15:13:06.404777805 +0000 UTC m=+1447.991561244" watchObservedRunningTime="2025-10-14 15:13:06.404812836 +0000 UTC m=+1447.991596285" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.188960 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-shll4" podUID="59dce401-ce86-4798-a1ef-6a520c406f54" containerName="registry-server" probeResult="failure" output=< Oct 14 15:13:07 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:13:07 crc kubenswrapper[4860]: > Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.372328 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.384283 4860 generic.go:334] "Generic (PLEG): container finished" podID="428069e3-797e-47db-b53e-565cf5a366bd" containerID="cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634" exitCode=0 Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.384312 4860 generic.go:334] "Generic (PLEG): container finished" podID="428069e3-797e-47db-b53e-565cf5a366bd" containerID="96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81" exitCode=0 Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.384320 4860 generic.go:334] "Generic (PLEG): container finished" podID="428069e3-797e-47db-b53e-565cf5a366bd" containerID="65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac" exitCode=0 Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.384348 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428069e3-797e-47db-b53e-565cf5a366bd","Type":"ContainerDied","Data":"cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634"} Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.384388 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.384409 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428069e3-797e-47db-b53e-565cf5a366bd","Type":"ContainerDied","Data":"96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81"} Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.384422 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428069e3-797e-47db-b53e-565cf5a366bd","Type":"ContainerDied","Data":"65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac"} Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.384431 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428069e3-797e-47db-b53e-565cf5a366bd","Type":"ContainerDied","Data":"8c64dd1e9b7be7746f7c2c4ecc49b7aae82d22d96b0e51d7c33876238f59f6e5"} Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.384451 4860 scope.go:117] "RemoveContainer" containerID="cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.414480 4860 scope.go:117] "RemoveContainer" containerID="43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.458368 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/428069e3-797e-47db-b53e-565cf5a366bd-run-httpd\") pod \"428069e3-797e-47db-b53e-565cf5a366bd\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.458441 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-config-data\") pod \"428069e3-797e-47db-b53e-565cf5a366bd\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.458531 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-sg-core-conf-yaml\") pod \"428069e3-797e-47db-b53e-565cf5a366bd\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.459397 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/428069e3-797e-47db-b53e-565cf5a366bd-log-httpd\") pod \"428069e3-797e-47db-b53e-565cf5a366bd\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.459409 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/428069e3-797e-47db-b53e-565cf5a366bd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "428069e3-797e-47db-b53e-565cf5a366bd" (UID: "428069e3-797e-47db-b53e-565cf5a366bd"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.459586 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-combined-ca-bundle\") pod \"428069e3-797e-47db-b53e-565cf5a366bd\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.459623 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5vw2\" (UniqueName: \"kubernetes.io/projected/428069e3-797e-47db-b53e-565cf5a366bd-kube-api-access-b5vw2\") pod \"428069e3-797e-47db-b53e-565cf5a366bd\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.459651 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-scripts\") pod \"428069e3-797e-47db-b53e-565cf5a366bd\" (UID: \"428069e3-797e-47db-b53e-565cf5a366bd\") " Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.460743 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/428069e3-797e-47db-b53e-565cf5a366bd-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.466938 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/428069e3-797e-47db-b53e-565cf5a366bd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "428069e3-797e-47db-b53e-565cf5a366bd" (UID: "428069e3-797e-47db-b53e-565cf5a366bd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.467478 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/428069e3-797e-47db-b53e-565cf5a366bd-kube-api-access-b5vw2" (OuterVolumeSpecName: "kube-api-access-b5vw2") pod "428069e3-797e-47db-b53e-565cf5a366bd" (UID: "428069e3-797e-47db-b53e-565cf5a366bd"). InnerVolumeSpecName "kube-api-access-b5vw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.475230 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-scripts" (OuterVolumeSpecName: "scripts") pod "428069e3-797e-47db-b53e-565cf5a366bd" (UID: "428069e3-797e-47db-b53e-565cf5a366bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.485941 4860 scope.go:117] "RemoveContainer" containerID="96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.549549 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "428069e3-797e-47db-b53e-565cf5a366bd" (UID: "428069e3-797e-47db-b53e-565cf5a366bd"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.563958 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.564002 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/428069e3-797e-47db-b53e-565cf5a366bd-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.564013 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5vw2\" (UniqueName: \"kubernetes.io/projected/428069e3-797e-47db-b53e-565cf5a366bd-kube-api-access-b5vw2\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.564040 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.596472 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "428069e3-797e-47db-b53e-565cf5a366bd" (UID: "428069e3-797e-47db-b53e-565cf5a366bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.663936 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-config-data" (OuterVolumeSpecName: "config-data") pod "428069e3-797e-47db-b53e-565cf5a366bd" (UID: "428069e3-797e-47db-b53e-565cf5a366bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.666020 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.666153 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/428069e3-797e-47db-b53e-565cf5a366bd-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.681968 4860 scope.go:117] "RemoveContainer" containerID="65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.712287 4860 scope.go:117] "RemoveContainer" containerID="cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634" Oct 14 15:13:07 crc kubenswrapper[4860]: E1014 15:13:07.712831 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634\": container with ID starting with cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634 not found: ID does not exist" containerID="cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.712960 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634"} err="failed to get container status \"cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634\": rpc error: code = NotFound desc = could not find container \"cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634\": container with ID starting with cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634 not found: ID does not exist" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.713075 4860 scope.go:117] "RemoveContainer" containerID="43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294" Oct 14 15:13:07 crc kubenswrapper[4860]: E1014 15:13:07.713421 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294\": container with ID starting with 43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294 not found: ID does not exist" containerID="43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.713527 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294"} err="failed to get container status \"43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294\": rpc error: code = NotFound desc = could not find container \"43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294\": container with ID starting with 43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294 not found: ID does not exist" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.713600 4860 scope.go:117] "RemoveContainer" containerID="96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81" Oct 14 15:13:07 crc kubenswrapper[4860]: E1014 15:13:07.713892 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81\": container with ID starting with 96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81 not found: ID does not exist" containerID="96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.713995 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81"} err="failed to get container status \"96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81\": rpc error: code = NotFound desc = could not find container \"96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81\": container with ID starting with 96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81 not found: ID does not exist" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.714103 4860 scope.go:117] "RemoveContainer" containerID="65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac" Oct 14 15:13:07 crc kubenswrapper[4860]: E1014 15:13:07.715224 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac\": container with ID starting with 65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac not found: ID does not exist" containerID="65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.715280 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac"} err="failed to get container status \"65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac\": rpc error: code = NotFound desc = could not find container \"65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac\": container with ID starting with 65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac not found: ID does not exist" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.715309 4860 scope.go:117] "RemoveContainer" containerID="cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.715742 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634"} err="failed to get container status \"cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634\": rpc error: code = NotFound desc = could not find container \"cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634\": container with ID starting with cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634 not found: ID does not exist" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.715807 4860 scope.go:117] "RemoveContainer" containerID="43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.716224 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294"} err="failed to get container status \"43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294\": rpc error: code = NotFound desc = could not find container 
\"43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294\": container with ID starting with 43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294 not found: ID does not exist" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.716306 4860 scope.go:117] "RemoveContainer" containerID="96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.716682 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81"} err="failed to get container status \"96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81\": rpc error: code = NotFound desc = could not find container \"96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81\": container with ID starting with 96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81 not found: ID does not exist" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.716707 4860 scope.go:117] "RemoveContainer" containerID="65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.717021 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac"} err="failed to get container status \"65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac\": rpc error: code = NotFound desc = could not find container \"65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac\": container with ID starting with 65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac not found: ID does not exist" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.717159 4860 scope.go:117] "RemoveContainer" containerID="cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.717565 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634"} err="failed to get container status \"cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634\": rpc error: code = NotFound desc = could not find container \"cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634\": container with ID starting with cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634 not found: ID does not exist" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.717602 4860 scope.go:117] "RemoveContainer" containerID="43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.717910 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294"} err="failed to get container status \"43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294\": rpc error: code = NotFound desc = could not find container \"43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294\": container with ID starting with 43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294 not found: ID does not exist" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.717952 4860 scope.go:117] "RemoveContainer" containerID="96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81" Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.718217 4860 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81"} err="failed to get container status \"96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81\": rpc error: code = NotFound desc = could not find container \"96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81\": container with ID starting with 96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81 not found: ID does not exist"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.718234 4860 scope.go:117] "RemoveContainer" containerID="65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.718523 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac"} err="failed to get container status \"65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac\": rpc error: code = NotFound desc = could not find container \"65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac\": container with ID starting with 65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac not found: ID does not exist"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.728735 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.738620 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.756230 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 14 15:13:07 crc kubenswrapper[4860]: E1014 15:13:07.756724 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="428069e3-797e-47db-b53e-565cf5a366bd" containerName="sg-core"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.756740 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="428069e3-797e-47db-b53e-565cf5a366bd" containerName="sg-core"
Oct 14 15:13:07 crc kubenswrapper[4860]: E1014 15:13:07.756775 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="428069e3-797e-47db-b53e-565cf5a366bd" containerName="ceilometer-central-agent"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.756784 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="428069e3-797e-47db-b53e-565cf5a366bd" containerName="ceilometer-central-agent"
Oct 14 15:13:07 crc kubenswrapper[4860]: E1014 15:13:07.756809 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="428069e3-797e-47db-b53e-565cf5a366bd" containerName="proxy-httpd"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.756818 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="428069e3-797e-47db-b53e-565cf5a366bd" containerName="proxy-httpd"
Oct 14 15:13:07 crc kubenswrapper[4860]: E1014 15:13:07.756837 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="428069e3-797e-47db-b53e-565cf5a366bd" containerName="ceilometer-notification-agent"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.756845 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="428069e3-797e-47db-b53e-565cf5a366bd" containerName="ceilometer-notification-agent"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.757115 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="428069e3-797e-47db-b53e-565cf5a366bd" containerName="ceilometer-notification-agent"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.757140 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="428069e3-797e-47db-b53e-565cf5a366bd" containerName="sg-core"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.757158 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="428069e3-797e-47db-b53e-565cf5a366bd" containerName="ceilometer-central-agent"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.757172 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="428069e3-797e-47db-b53e-565cf5a366bd" containerName="proxy-httpd"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.758882 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.765120 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.768043 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.768264 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.768382 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.871195 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88rrc\" (UniqueName: \"kubernetes.io/projected/7f848bea-23dc-4318-9677-ebbd4fe34a09-kube-api-access-88rrc\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.871364 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.871398 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-config-data\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.871449 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-scripts\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.871470 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.871500 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f848bea-23dc-4318-9677-ebbd4fe34a09-run-httpd\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.871544 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.871594 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f848bea-23dc-4318-9677-ebbd4fe34a09-log-httpd\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.972799 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.972841 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-config-data\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.972899 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-scripts\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.972919 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.972948 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f848bea-23dc-4318-9677-ebbd4fe34a09-run-httpd\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.972984 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.973018 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f848bea-23dc-4318-9677-ebbd4fe34a09-log-httpd\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.973050 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88rrc\" (UniqueName: \"kubernetes.io/projected/7f848bea-23dc-4318-9677-ebbd4fe34a09-kube-api-access-88rrc\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.974305 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f848bea-23dc-4318-9677-ebbd4fe34a09-run-httpd\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.974297 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f848bea-23dc-4318-9677-ebbd4fe34a09-log-httpd\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.979083 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.982786 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.992624 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88rrc\" (UniqueName: \"kubernetes.io/projected/7f848bea-23dc-4318-9677-ebbd4fe34a09-kube-api-access-88rrc\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:07 crc kubenswrapper[4860]: I1014 15:13:07.993079 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-scripts\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:08 crc kubenswrapper[4860]: I1014 15:13:08.007917 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-config-data\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:08 crc kubenswrapper[4860]: I1014 15:13:08.012358 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " pod="openstack/ceilometer-0"
Oct 14 15:13:08 crc kubenswrapper[4860]: I1014 15:13:08.077265 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 15:13:08 crc kubenswrapper[4860]: I1014 15:13:08.535454 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Oct 14 15:13:08 crc kubenswrapper[4860]: I1014 15:13:08.581075 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 15:13:09 crc kubenswrapper[4860]: I1014 15:13:09.071227 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="428069e3-797e-47db-b53e-565cf5a366bd" path="/var/lib/kubelet/pods/428069e3-797e-47db-b53e-565cf5a366bd/volumes"
Oct 14 15:13:09 crc kubenswrapper[4860]: I1014 15:13:09.416593 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f848bea-23dc-4318-9677-ebbd4fe34a09","Type":"ContainerStarted","Data":"a1ae726fdad9bd4c1a805c5a223bc9c2fdf8105e07e4a0e31b8b6c8d53088aed"}
Oct 14 15:13:10 crc kubenswrapper[4860]: I1014 15:13:10.228270 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sskvp"
Oct 14 15:13:10 crc kubenswrapper[4860]: I1014 15:13:10.294838 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sskvp"
Oct 14 15:13:10 crc kubenswrapper[4860]: I1014 15:13:10.428363 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f848bea-23dc-4318-9677-ebbd4fe34a09","Type":"ContainerStarted","Data":"d4fae14577e2a847b30a1a842fd9d826abb6290d24db28d038fd79af513eb671"}
Oct 14 15:13:10 crc kubenswrapper[4860]: I1014 15:13:10.428424 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f848bea-23dc-4318-9677-ebbd4fe34a09","Type":"ContainerStarted","Data":"cf7c1f3e1460b3b61e0fff38b5a09d90ab2468bc270508e7615b638d0ef789c4"}
Oct 14 15:13:11 crc kubenswrapper[4860]: I1014 15:13:11.439589 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f848bea-23dc-4318-9677-ebbd4fe34a09","Type":"ContainerStarted","Data":"cf9e19032233da7146e6d9aa5110888e098e418f536a5e859c531a90761c6e80"}
Oct 14 15:13:11 crc kubenswrapper[4860]: I1014 15:13:11.691295 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6gkdc"]
Oct 14 15:13:11 crc kubenswrapper[4860]: I1014 15:13:11.712251 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6gkdc"
Oct 14 15:13:11 crc kubenswrapper[4860]: I1014 15:13:11.726643 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6gkdc"]
Oct 14 15:13:11 crc kubenswrapper[4860]: I1014 15:13:11.857528 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-utilities\") pod \"certified-operators-6gkdc\" (UID: \"33caa1e1-70c8-4eb2-b3ee-2400962b4a11\") " pod="openshift-marketplace/certified-operators-6gkdc"
Oct 14 15:13:11 crc kubenswrapper[4860]: I1014 15:13:11.857625 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzj7d\" (UniqueName: \"kubernetes.io/projected/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-kube-api-access-fzj7d\") pod \"certified-operators-6gkdc\" (UID: \"33caa1e1-70c8-4eb2-b3ee-2400962b4a11\") " pod="openshift-marketplace/certified-operators-6gkdc"
Oct 14 15:13:11 crc kubenswrapper[4860]: I1014 15:13:11.857861 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-catalog-content\") pod \"certified-operators-6gkdc\" (UID: \"33caa1e1-70c8-4eb2-b3ee-2400962b4a11\") " pod="openshift-marketplace/certified-operators-6gkdc"
Oct 14 15:13:11 crc kubenswrapper[4860]: I1014 15:13:11.959334 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-catalog-content\") pod \"certified-operators-6gkdc\" (UID: \"33caa1e1-70c8-4eb2-b3ee-2400962b4a11\") " pod="openshift-marketplace/certified-operators-6gkdc"
Oct 14 15:13:11 crc kubenswrapper[4860]: I1014 15:13:11.959418 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-utilities\") pod \"certified-operators-6gkdc\" (UID: \"33caa1e1-70c8-4eb2-b3ee-2400962b4a11\") " pod="openshift-marketplace/certified-operators-6gkdc"
Oct 14 15:13:11 crc kubenswrapper[4860]: I1014 15:13:11.959500 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzj7d\" (UniqueName: \"kubernetes.io/projected/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-kube-api-access-fzj7d\") pod \"certified-operators-6gkdc\" (UID: \"33caa1e1-70c8-4eb2-b3ee-2400962b4a11\") " pod="openshift-marketplace/certified-operators-6gkdc"
Oct 14 15:13:11 crc kubenswrapper[4860]: I1014 15:13:11.959922 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-utilities\") pod \"certified-operators-6gkdc\" (UID: \"33caa1e1-70c8-4eb2-b3ee-2400962b4a11\") " pod="openshift-marketplace/certified-operators-6gkdc"
Oct 14 15:13:11 crc kubenswrapper[4860]: I1014 15:13:11.959928 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-catalog-content\") pod \"certified-operators-6gkdc\" (UID: \"33caa1e1-70c8-4eb2-b3ee-2400962b4a11\") " pod="openshift-marketplace/certified-operators-6gkdc"
Oct 14 15:13:11 crc kubenswrapper[4860]: I1014 15:13:11.991822 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzj7d\" (UniqueName: \"kubernetes.io/projected/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-kube-api-access-fzj7d\") pod \"certified-operators-6gkdc\" (UID: \"33caa1e1-70c8-4eb2-b3ee-2400962b4a11\") " pod="openshift-marketplace/certified-operators-6gkdc"
Oct 14 15:13:12 crc kubenswrapper[4860]: I1014 15:13:12.039681 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6gkdc"
Oct 14 15:13:12 crc kubenswrapper[4860]: I1014 15:13:12.546532 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6gkdc"]
Oct 14 15:13:12 crc kubenswrapper[4860]: I1014 15:13:12.644310 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sskvp"]
Oct 14 15:13:12 crc kubenswrapper[4860]: I1014 15:13:12.644541 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sskvp" podUID="ee6fbfd1-847b-4c26-ad01-dcc5e138f530" containerName="registry-server" containerID="cri-o://4a327b8e79844b8373c75c4ac0e7db05a687f9fecb137cfef5b4de64be34e053" gracePeriod=2
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.152961 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sskvp"
Oct 14 15:13:13 crc kubenswrapper[4860]: E1014 15:13:13.199326 4860 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/549168ce75f2b34bc3911a50c747d903b0d45429f7cebf9eb9b93631d667f8e2/diff" to get inode usage: stat /var/lib/containers/storage/overlay/549168ce75f2b34bc3911a50c747d903b0d45429f7cebf9eb9b93631d667f8e2/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_kube-state-metrics-0_22d1e6d4-a98e-457e-9e99-8e2f4319031b/kube-state-metrics/0.log" to get inode usage: stat /var/log/pods/openstack_kube-state-metrics-0_22d1e6d4-a98e-457e-9e99-8e2f4319031b/kube-state-metrics/0.log: no such file or directory
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.286880 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcr4s\" (UniqueName: \"kubernetes.io/projected/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-kube-api-access-jcr4s\") pod \"ee6fbfd1-847b-4c26-ad01-dcc5e138f530\" (UID: \"ee6fbfd1-847b-4c26-ad01-dcc5e138f530\") "
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.287130 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-catalog-content\") pod \"ee6fbfd1-847b-4c26-ad01-dcc5e138f530\" (UID: \"ee6fbfd1-847b-4c26-ad01-dcc5e138f530\") "
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.287279 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-utilities\") pod \"ee6fbfd1-847b-4c26-ad01-dcc5e138f530\" (UID: \"ee6fbfd1-847b-4c26-ad01-dcc5e138f530\") "
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.287675 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-utilities" (OuterVolumeSpecName: "utilities") pod "ee6fbfd1-847b-4c26-ad01-dcc5e138f530" (UID: "ee6fbfd1-847b-4c26-ad01-dcc5e138f530"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.287865 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.295231 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-kube-api-access-jcr4s" (OuterVolumeSpecName: "kube-api-access-jcr4s") pod "ee6fbfd1-847b-4c26-ad01-dcc5e138f530" (UID: "ee6fbfd1-847b-4c26-ad01-dcc5e138f530"). InnerVolumeSpecName "kube-api-access-jcr4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.345051 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee6fbfd1-847b-4c26-ad01-dcc5e138f530" (UID: "ee6fbfd1-847b-4c26-ad01-dcc5e138f530"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.390092 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.390137 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcr4s\" (UniqueName: \"kubernetes.io/projected/ee6fbfd1-847b-4c26-ad01-dcc5e138f530-kube-api-access-jcr4s\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.472145 4860 generic.go:334] "Generic (PLEG): container finished" podID="33caa1e1-70c8-4eb2-b3ee-2400962b4a11" containerID="000b1d155e3eb044547266051bcfd29a51bf18983a68f7da824e6cc57913a524" exitCode=0
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.472227 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gkdc" event={"ID":"33caa1e1-70c8-4eb2-b3ee-2400962b4a11","Type":"ContainerDied","Data":"000b1d155e3eb044547266051bcfd29a51bf18983a68f7da824e6cc57913a524"}
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.472254 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gkdc" event={"ID":"33caa1e1-70c8-4eb2-b3ee-2400962b4a11","Type":"ContainerStarted","Data":"ebdbf19d9902861a063c5c0276e63dd62d6bb9ac06a2a34e83eb6579f23999a8"}
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.479727 4860 generic.go:334] "Generic (PLEG): container finished" podID="ee6fbfd1-847b-4c26-ad01-dcc5e138f530" containerID="4a327b8e79844b8373c75c4ac0e7db05a687f9fecb137cfef5b4de64be34e053" exitCode=0
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.479772 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sskvp" event={"ID":"ee6fbfd1-847b-4c26-ad01-dcc5e138f530","Type":"ContainerDied","Data":"4a327b8e79844b8373c75c4ac0e7db05a687f9fecb137cfef5b4de64be34e053"}
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.479801 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sskvp" event={"ID":"ee6fbfd1-847b-4c26-ad01-dcc5e138f530","Type":"ContainerDied","Data":"be70dfc50eba5eabacb61329ba933b866e1f91f9c1681421fa8aceac02c79a1d"}
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.479822 4860 scope.go:117] "RemoveContainer" containerID="4a327b8e79844b8373c75c4ac0e7db05a687f9fecb137cfef5b4de64be34e053"
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.479833 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sskvp"
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.526174 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sskvp"]
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.526695 4860 scope.go:117] "RemoveContainer" containerID="ccdaed392499debee92431912777d9f7c2ee643097dbbb465631d97de2983f59"
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.535660 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.538270 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sskvp"]
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.561547 4860 scope.go:117] "RemoveContainer" containerID="392ab0475c004969d0246451aa0dc89b0934b08e9deb8e37d72f205470004c12"
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.579952 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.603611 4860 scope.go:117] "RemoveContainer" containerID="4a327b8e79844b8373c75c4ac0e7db05a687f9fecb137cfef5b4de64be34e053"
Oct 14 15:13:13 crc kubenswrapper[4860]: E1014 15:13:13.604193 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a327b8e79844b8373c75c4ac0e7db05a687f9fecb137cfef5b4de64be34e053\": container with ID starting with 4a327b8e79844b8373c75c4ac0e7db05a687f9fecb137cfef5b4de64be34e053 not found: ID does not exist" containerID="4a327b8e79844b8373c75c4ac0e7db05a687f9fecb137cfef5b4de64be34e053"
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.604247 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a327b8e79844b8373c75c4ac0e7db05a687f9fecb137cfef5b4de64be34e053"} err="failed to get container status \"4a327b8e79844b8373c75c4ac0e7db05a687f9fecb137cfef5b4de64be34e053\": rpc error: code = NotFound desc = could not find container \"4a327b8e79844b8373c75c4ac0e7db05a687f9fecb137cfef5b4de64be34e053\": container with ID starting with 4a327b8e79844b8373c75c4ac0e7db05a687f9fecb137cfef5b4de64be34e053 not found: ID does not exist"
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.604274 4860 scope.go:117] "RemoveContainer" containerID="ccdaed392499debee92431912777d9f7c2ee643097dbbb465631d97de2983f59"
Oct 14 15:13:13 crc kubenswrapper[4860]: E1014 15:13:13.604706 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccdaed392499debee92431912777d9f7c2ee643097dbbb465631d97de2983f59\": container with ID starting with ccdaed392499debee92431912777d9f7c2ee643097dbbb465631d97de2983f59 not found: ID does not exist" containerID="ccdaed392499debee92431912777d9f7c2ee643097dbbb465631d97de2983f59"
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.604751 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccdaed392499debee92431912777d9f7c2ee643097dbbb465631d97de2983f59"} err="failed to get container status \"ccdaed392499debee92431912777d9f7c2ee643097dbbb465631d97de2983f59\": rpc error: code = NotFound desc = could not find container \"ccdaed392499debee92431912777d9f7c2ee643097dbbb465631d97de2983f59\": container with ID starting with ccdaed392499debee92431912777d9f7c2ee643097dbbb465631d97de2983f59 not found: ID does not exist"
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.604776 4860 scope.go:117] "RemoveContainer" containerID="392ab0475c004969d0246451aa0dc89b0934b08e9deb8e37d72f205470004c12"
Oct 14 15:13:13 crc kubenswrapper[4860]: E1014 15:13:13.605015 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"392ab0475c004969d0246451aa0dc89b0934b08e9deb8e37d72f205470004c12\": container with ID starting with 392ab0475c004969d0246451aa0dc89b0934b08e9deb8e37d72f205470004c12 not found: ID does not exist" containerID="392ab0475c004969d0246451aa0dc89b0934b08e9deb8e37d72f205470004c12"
Oct 14 15:13:13 crc kubenswrapper[4860]: I1014 15:13:13.605055 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"392ab0475c004969d0246451aa0dc89b0934b08e9deb8e37d72f205470004c12"} err="failed to get container status \"392ab0475c004969d0246451aa0dc89b0934b08e9deb8e37d72f205470004c12\": rpc error: code = NotFound desc = could not find container \"392ab0475c004969d0246451aa0dc89b0934b08e9deb8e37d72f205470004c12\": container with ID starting with 392ab0475c004969d0246451aa0dc89b0934b08e9deb8e37d72f205470004c12 not found: ID does not exist"
Oct 14 15:13:14 crc kubenswrapper[4860]: I1014 15:13:14.493249 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Oct 14 15:13:14 crc kubenswrapper[4860]: I1014 15:13:14.504390 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f848bea-23dc-4318-9677-ebbd4fe34a09","Type":"ContainerStarted","Data":"5d4f8f0f82d6fc7894a387ca38ca4cc8427e55704c03907f624d2a559b98dd92"}
Oct 14 15:13:14 crc kubenswrapper[4860]: I1014 15:13:14.505305 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 14 15:13:14 crc kubenswrapper[4860]: I1014 15:13:14.575632 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.229321805 podStartE2EDuration="7.575608741s" podCreationTimestamp="2025-10-14 15:13:07 +0000 UTC" firstStartedPulling="2025-10-14 15:13:08.590786496 +0000 UTC m=+1450.177569945" lastFinishedPulling="2025-10-14 15:13:13.937073432 +0000 UTC m=+1455.523856881" observedRunningTime="2025-10-14 15:13:14.562502333 +0000 UTC m=+1456.149285792" watchObservedRunningTime="2025-10-14 15:13:14.575608741 +0000 UTC m=+1456.162392190"
Oct 14 15:13:14 crc kubenswrapper[4860]: I1014 15:13:14.799537 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Oct 14 15:13:14 crc kubenswrapper[4860]: I1014 15:13:14.901214 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Oct 14 15:13:14 crc kubenswrapper[4860]: I1014 15:13:14.970477 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Oct 14 15:13:14 crc kubenswrapper[4860]: I1014 15:13:14.970544 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.061080 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9g2sn"]
Oct 14 15:13:15 crc kubenswrapper[4860]: E1014 15:13:15.061480 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee6fbfd1-847b-4c26-ad01-dcc5e138f530" containerName="registry-server"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.061500 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee6fbfd1-847b-4c26-ad01-dcc5e138f530" containerName="registry-server"
Oct 14 15:13:15 crc kubenswrapper[4860]: E1014 15:13:15.061523 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee6fbfd1-847b-4c26-ad01-dcc5e138f530" containerName="extract-content"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.061530 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee6fbfd1-847b-4c26-ad01-dcc5e138f530" containerName="extract-content"
Oct 14 15:13:15 crc kubenswrapper[4860]: E1014 15:13:15.061542 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee6fbfd1-847b-4c26-ad01-dcc5e138f530" containerName="extract-utilities"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.061549 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee6fbfd1-847b-4c26-ad01-dcc5e138f530" containerName="extract-utilities"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.061722 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee6fbfd1-847b-4c26-ad01-dcc5e138f530" containerName="registry-server"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.063196 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9g2sn"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.099001 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee6fbfd1-847b-4c26-ad01-dcc5e138f530" path="/var/lib/kubelet/pods/ee6fbfd1-847b-4c26-ad01-dcc5e138f530/volumes"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.101160 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9g2sn"]
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.141455 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pqhx\" (UniqueName: \"kubernetes.io/projected/2229cb3a-9254-47e8-8006-715670fb974e-kube-api-access-9pqhx\") pod \"redhat-marketplace-9g2sn\" (UID: \"2229cb3a-9254-47e8-8006-715670fb974e\") " pod="openshift-marketplace/redhat-marketplace-9g2sn"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.141530 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2229cb3a-9254-47e8-8006-715670fb974e-catalog-content\") pod \"redhat-marketplace-9g2sn\" (UID: \"2229cb3a-9254-47e8-8006-715670fb974e\") " pod="openshift-marketplace/redhat-marketplace-9g2sn"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.141678 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2229cb3a-9254-47e8-8006-715670fb974e-utilities\") pod \"redhat-marketplace-9g2sn\" (UID: \"2229cb3a-9254-47e8-8006-715670fb974e\") " pod="openshift-marketplace/redhat-marketplace-9g2sn"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.243751 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pqhx\" (UniqueName: \"kubernetes.io/projected/2229cb3a-9254-47e8-8006-715670fb974e-kube-api-access-9pqhx\") pod \"redhat-marketplace-9g2sn\" (UID: \"2229cb3a-9254-47e8-8006-715670fb974e\") " pod="openshift-marketplace/redhat-marketplace-9g2sn"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.243817 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2229cb3a-9254-47e8-8006-715670fb974e-catalog-content\") pod \"redhat-marketplace-9g2sn\" (UID: \"2229cb3a-9254-47e8-8006-715670fb974e\") " pod="openshift-marketplace/redhat-marketplace-9g2sn"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.243984 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2229cb3a-9254-47e8-8006-715670fb974e-utilities\") pod \"redhat-marketplace-9g2sn\" (UID: \"2229cb3a-9254-47e8-8006-715670fb974e\") " pod="openshift-marketplace/redhat-marketplace-9g2sn"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.244490 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2229cb3a-9254-47e8-8006-715670fb974e-utilities\") pod \"redhat-marketplace-9g2sn\" (UID: \"2229cb3a-9254-47e8-8006-715670fb974e\") " pod="openshift-marketplace/redhat-marketplace-9g2sn"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.244708 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2229cb3a-9254-47e8-8006-715670fb974e-catalog-content\") pod \"redhat-marketplace-9g2sn\" (UID: \"2229cb3a-9254-47e8-8006-715670fb974e\") " pod="openshift-marketplace/redhat-marketplace-9g2sn"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.270124 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pqhx\" (UniqueName: \"kubernetes.io/projected/2229cb3a-9254-47e8-8006-715670fb974e-kube-api-access-9pqhx\") pod \"redhat-marketplace-9g2sn\" (UID: \"2229cb3a-9254-47e8-8006-715670fb974e\") " pod="openshift-marketplace/redhat-marketplace-9g2sn"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.395845 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9g2sn"
Oct 14 15:13:15 crc kubenswrapper[4860]: I1014 15:13:15.582117 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gkdc" event={"ID":"33caa1e1-70c8-4eb2-b3ee-2400962b4a11","Type":"ContainerStarted","Data":"c27d1af20b44b8235541715814c8a7d9e4d5eb1755e1707a483361bf7a29a248"}
Oct 14 15:13:16 crc kubenswrapper[4860]: I1014 15:13:16.053217 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5a10352d-79c7-44da-b182-2fe199712ddf" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 14 15:13:16 crc kubenswrapper[4860]: I1014 15:13:16.053388 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5a10352d-79c7-44da-b182-2fe199712ddf" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 14 15:13:16 crc kubenswrapper[4860]: I1014 15:13:16.077158 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9g2sn"]
Oct 14 15:13:16 crc kubenswrapper[4860]: I1014 15:13:16.591300 4860 generic.go:334] "Generic (PLEG): container finished" podID="2229cb3a-9254-47e8-8006-715670fb974e" containerID="0e3a071c7ee8eb6ca208ce634e05441e99870476f7f39d3a735100444bd1bd7a" exitCode=0
Oct 14 15:13:16 crc kubenswrapper[4860]: I1014 15:13:16.591366 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g2sn" event={"ID":"2229cb3a-9254-47e8-8006-715670fb974e","Type":"ContainerDied","Data":"0e3a071c7ee8eb6ca208ce634e05441e99870476f7f39d3a735100444bd1bd7a"}
Oct 14 15:13:16 crc kubenswrapper[4860]: I1014 15:13:16.591650 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g2sn" event={"ID":"2229cb3a-9254-47e8-8006-715670fb974e","Type":"ContainerStarted","Data":"b3d7f0674243632a0c28a5f1cc1a1a9a9bcb8488739a73e64f89689d26d5100e"}
Oct 14 15:13:17 crc kubenswrapper[4860]: I1014 15:13:17.165895 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-shll4" podUID="59dce401-ce86-4798-a1ef-6a520c406f54" containerName="registry-server" probeResult="failure" output=<
Oct 14 15:13:17 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s
Oct 14 15:13:17 crc kubenswrapper[4860]: >
Oct 14 15:13:18 crc kubenswrapper[4860]: E1014 15:13:18.959442 4860 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod428069e3_797e_47db_b53e_565cf5a366bd.slice/crio-8c64dd1e9b7be7746f7c2c4ecc49b7aae82d22d96b0e51d7c33876238f59f6e5: Error finding container 8c64dd1e9b7be7746f7c2c4ecc49b7aae82d22d96b0e51d7c33876238f59f6e5: Status 404 returned error can't find the container with id 8c64dd1e9b7be7746f7c2c4ecc49b7aae82d22d96b0e51d7c33876238f59f6e5
Oct 14 15:13:19 crc kubenswrapper[4860]: E1014 15:13:19.190152 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod428069e3_797e_47db_b53e_565cf5a366bd.slice/crio-conmon-96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod428069e3_797e_47db_b53e_565cf5a366bd.slice/crio-conmon-cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee6fbfd1_847b_4c26_ad01_dcc5e138f530.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee6fbfd1_847b_4c26_ad01_dcc5e138f530.slice/crio-conmon-4a327b8e79844b8373c75c4ac0e7db05a687f9fecb137cfef5b4de64be34e053.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod428069e3_797e_47db_b53e_565cf5a366bd.slice/crio-96885b35f274cda3eceaa54b23e4ec3043b7f836cc12c15edddb6335e8279a81.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9ff86ff_9bf0_4ec8_b801_40fc71e53742.slice/crio-e4af3749f69dc7206fce76cc17289230ab9bd5713394fc5742b9a9e1d718a2e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod428069e3_797e_47db_b53e_565cf5a366bd.slice/crio-43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee6fbfd1_847b_4c26_ad01_dcc5e138f530.slice/crio-4a327b8e79844b8373c75c4ac0e7db05a687f9fecb137cfef5b4de64be34e053.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod428069e3_797e_47db_b53e_565cf5a366bd.slice/crio-conmon-65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee6fbfd1_847b_4c26_ad01_dcc5e138f530.slice/crio-be70dfc50eba5eabacb61329ba933b866e1f91f9c1681421fa8aceac02c79a1d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod428069e3_797e_47db_b53e_565cf5a366bd.slice/crio-65a542ad15b90117f88c8e66098cee4d164a313a462cde3b68138e87cdbfa8ac.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod428069e3_797e_47db_b53e_565cf5a366bd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod428069e3_797e_47db_b53e_565cf5a366bd.slice/crio-conmon-43718f18f6ec496be7ce9f0dbb30388c5f8151012629e8280a900020e604d294.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod428069e3_797e_47db_b53e_565cf5a366bd.slice/crio-cf94c6b4d253cd0c5aa48c51b130e977500e0c7a34da7ad94bef3bb81300b634.scope\": RecentStats: unable to find data in memory cache]"
Oct 14 15:13:19 crc kubenswrapper[4860]: I1014 15:13:19.621195 4860 generic.go:334] "Generic (PLEG): container finished" podID="e9ff86ff-9bf0-4ec8-b801-40fc71e53742" containerID="e4af3749f69dc7206fce76cc17289230ab9bd5713394fc5742b9a9e1d718a2e9" exitCode=137
Oct 14 15:13:19 crc kubenswrapper[4860]: I1014 15:13:19.621271 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e9ff86ff-9bf0-4ec8-b801-40fc71e53742","Type":"ContainerDied","Data":"e4af3749f69dc7206fce76cc17289230ab9bd5713394fc5742b9a9e1d718a2e9"}
Oct 14 15:13:19 crc kubenswrapper[4860]: I1014 15:13:19.622876 4860 generic.go:334] "Generic (PLEG): container finished" podID="1aad2102-2a68-40bf-9509-8ca72c8cb48a" containerID="1cfc687799c01f4b994d8f30fbefeddbd778f82c956d03661db83b21df08b223" exitCode=137
Oct 14 15:13:19 crc kubenswrapper[4860]: I1014 15:13:19.622904 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1aad2102-2a68-40bf-9509-8ca72c8cb48a","Type":"ContainerDied","Data":"1cfc687799c01f4b994d8f30fbefeddbd778f82c956d03661db83b21df08b223"}
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.106375 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.112909 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.231990 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aad2102-2a68-40bf-9509-8ca72c8cb48a-combined-ca-bundle\") pod \"1aad2102-2a68-40bf-9509-8ca72c8cb48a\" (UID: \"1aad2102-2a68-40bf-9509-8ca72c8cb48a\") "
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.232173 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lgp8\" (UniqueName: \"kubernetes.io/projected/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-kube-api-access-4lgp8\") pod \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\" (UID: \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\") "
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.232255 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-combined-ca-bundle\") pod \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\" (UID: \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\") "
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.232290 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-logs\") pod \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\" (UID: \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\") "
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.232336 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-config-data\") pod \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\" (UID: \"e9ff86ff-9bf0-4ec8-b801-40fc71e53742\") "
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.232381 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aad2102-2a68-40bf-9509-8ca72c8cb48a-config-data\") pod \"1aad2102-2a68-40bf-9509-8ca72c8cb48a\" (UID: \"1aad2102-2a68-40bf-9509-8ca72c8cb48a\") "
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.232445 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24tvj\" (UniqueName: \"kubernetes.io/projected/1aad2102-2a68-40bf-9509-8ca72c8cb48a-kube-api-access-24tvj\") pod \"1aad2102-2a68-40bf-9509-8ca72c8cb48a\" (UID: \"1aad2102-2a68-40bf-9509-8ca72c8cb48a\") "
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.232664 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-logs" (OuterVolumeSpecName: "logs") pod "e9ff86ff-9bf0-4ec8-b801-40fc71e53742" (UID: "e9ff86ff-9bf0-4ec8-b801-40fc71e53742"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.233259 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-logs\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.245907 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aad2102-2a68-40bf-9509-8ca72c8cb48a-kube-api-access-24tvj" (OuterVolumeSpecName: "kube-api-access-24tvj") pod "1aad2102-2a68-40bf-9509-8ca72c8cb48a" (UID: "1aad2102-2a68-40bf-9509-8ca72c8cb48a"). InnerVolumeSpecName "kube-api-access-24tvj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.246139 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-kube-api-access-4lgp8" (OuterVolumeSpecName: "kube-api-access-4lgp8") pod "e9ff86ff-9bf0-4ec8-b801-40fc71e53742" (UID: "e9ff86ff-9bf0-4ec8-b801-40fc71e53742"). InnerVolumeSpecName "kube-api-access-4lgp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.268931 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9ff86ff-9bf0-4ec8-b801-40fc71e53742" (UID: "e9ff86ff-9bf0-4ec8-b801-40fc71e53742"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.277217 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aad2102-2a68-40bf-9509-8ca72c8cb48a-config-data" (OuterVolumeSpecName: "config-data") pod "1aad2102-2a68-40bf-9509-8ca72c8cb48a" (UID: "1aad2102-2a68-40bf-9509-8ca72c8cb48a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.282176 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aad2102-2a68-40bf-9509-8ca72c8cb48a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1aad2102-2a68-40bf-9509-8ca72c8cb48a" (UID: "1aad2102-2a68-40bf-9509-8ca72c8cb48a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.309539 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-config-data" (OuterVolumeSpecName: "config-data") pod "e9ff86ff-9bf0-4ec8-b801-40fc71e53742" (UID: "e9ff86ff-9bf0-4ec8-b801-40fc71e53742"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.335199 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24tvj\" (UniqueName: \"kubernetes.io/projected/1aad2102-2a68-40bf-9509-8ca72c8cb48a-kube-api-access-24tvj\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.335239 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aad2102-2a68-40bf-9509-8ca72c8cb48a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.335252 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lgp8\" (UniqueName: \"kubernetes.io/projected/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-kube-api-access-4lgp8\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.335262 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.335274 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ff86ff-9bf0-4ec8-b801-40fc71e53742-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.335288 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aad2102-2a68-40bf-9509-8ca72c8cb48a-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.640152 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g2sn" event={"ID":"2229cb3a-9254-47e8-8006-715670fb974e","Type":"ContainerStarted","Data":"e54fc15e1f819958a10ded658d529f15cb06538bd0a25d53e071dd9de2b17f31"}
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.644603 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e9ff86ff-9bf0-4ec8-b801-40fc71e53742","Type":"ContainerDied","Data":"74f7cf205f0b972ef75bfd31326c11468a0de94409b88c6ad4801e0e4e1094f9"}
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.644653 4860 scope.go:117] "RemoveContainer" containerID="e4af3749f69dc7206fce76cc17289230ab9bd5713394fc5742b9a9e1d718a2e9"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.644772 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.652178 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1aad2102-2a68-40bf-9509-8ca72c8cb48a","Type":"ContainerDied","Data":"d73eb10d78f8cd368c14ad29a2c32a91b3cc056820e0fc4064c0cd3b3873a8b1"}
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.652289 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.736253 4860 scope.go:117] "RemoveContainer" containerID="f43ede1457a38f9a73e73e0feb5dd3eeb9cf6d3d98552faffd1e3f9786acb456"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.751311 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.773072 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.777659 4860 scope.go:117] "RemoveContainer" containerID="1cfc687799c01f4b994d8f30fbefeddbd778f82c956d03661db83b21df08b223"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.784594 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.807357 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.832008 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Oct 14 15:13:20 crc kubenswrapper[4860]: E1014 15:13:20.832751 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ff86ff-9bf0-4ec8-b801-40fc71e53742" containerName="nova-metadata-metadata"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.832764 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ff86ff-9bf0-4ec8-b801-40fc71e53742" containerName="nova-metadata-metadata"
Oct 14 15:13:20 crc kubenswrapper[4860]: E1014 15:13:20.832786 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aad2102-2a68-40bf-9509-8ca72c8cb48a" containerName="nova-cell1-novncproxy-novncproxy"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.832793 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aad2102-2a68-40bf-9509-8ca72c8cb48a" containerName="nova-cell1-novncproxy-novncproxy"
Oct 14 15:13:20 crc kubenswrapper[4860]: E1014 15:13:20.832807 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ff86ff-9bf0-4ec8-b801-40fc71e53742" containerName="nova-metadata-log"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.832815 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ff86ff-9bf0-4ec8-b801-40fc71e53742" containerName="nova-metadata-log"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.833245 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ff86ff-9bf0-4ec8-b801-40fc71e53742" containerName="nova-metadata-metadata"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.833268 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ff86ff-9bf0-4ec8-b801-40fc71e53742" containerName="nova-metadata-log"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.833302 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aad2102-2a68-40bf-9509-8ca72c8cb48a" containerName="nova-cell1-novncproxy-novncproxy"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.835271 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.837881 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.838188 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.851673 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.864119 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.866940 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.870114 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.870155 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.870118 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.878010 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.945420 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40bc087-e558-4107-8e76-b5daa3ff73c1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d40bc087-e558-4107-8e76-b5daa3ff73c1\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.945473 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn97q\" (UniqueName: \"kubernetes.io/projected/d40bc087-e558-4107-8e76-b5daa3ff73c1-kube-api-access-vn97q\") pod \"nova-cell1-novncproxy-0\" (UID: \"d40bc087-e558-4107-8e76-b5daa3ff73c1\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.945506 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb99e52e-496c-48e1-a66b-71eb52b04370-logs\") pod \"nova-metadata-0\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.945603 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d40bc087-e558-4107-8e76-b5daa3ff73c1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d40bc087-e558-4107-8e76-b5daa3ff73c1\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.945663 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-config-data\") pod \"nova-metadata-0\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.945707 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.945736 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.945772 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78rwq\" (UniqueName: \"kubernetes.io/projected/eb99e52e-496c-48e1-a66b-71eb52b04370-kube-api-access-78rwq\") pod \"nova-metadata-0\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.945809 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40bc087-e558-4107-8e76-b5daa3ff73c1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d40bc087-e558-4107-8e76-b5daa3ff73c1\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 15:13:20 crc kubenswrapper[4860]: I1014 15:13:20.945834 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d40bc087-e558-4107-8e76-b5daa3ff73c1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d40bc087-e558-4107-8e76-b5daa3ff73c1\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.047060 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-config-data\") pod \"nova-metadata-0\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.047114 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.047145 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.047173 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78rwq\" (UniqueName: \"kubernetes.io/projected/eb99e52e-496c-48e1-a66b-71eb52b04370-kube-api-access-78rwq\") pod \"nova-metadata-0\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.047204 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40bc087-e558-4107-8e76-b5daa3ff73c1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d40bc087-e558-4107-8e76-b5daa3ff73c1\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.047227 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d40bc087-e558-4107-8e76-b5daa3ff73c1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d40bc087-e558-4107-8e76-b5daa3ff73c1\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.047257 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40bc087-e558-4107-8e76-b5daa3ff73c1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d40bc087-e558-4107-8e76-b5daa3ff73c1\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.047283 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn97q\" (UniqueName: \"kubernetes.io/projected/d40bc087-e558-4107-8e76-b5daa3ff73c1-kube-api-access-vn97q\") pod \"nova-cell1-novncproxy-0\" (UID: \"d40bc087-e558-4107-8e76-b5daa3ff73c1\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.047312 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb99e52e-496c-48e1-a66b-71eb52b04370-logs\") pod \"nova-metadata-0\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.047345 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d40bc087-e558-4107-8e76-b5daa3ff73c1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d40bc087-e558-4107-8e76-b5daa3ff73c1\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.048614 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb99e52e-496c-48e1-a66b-71eb52b04370-logs\") pod \"nova-metadata-0\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.066515 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d40bc087-e558-4107-8e76-b5daa3ff73c1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d40bc087-e558-4107-8e76-b5daa3ff73c1\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.066592 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.067135 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40bc087-e558-4107-8e76-b5daa3ff73c1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID:
\"d40bc087-e558-4107-8e76-b5daa3ff73c1\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.068678 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " pod="openstack/nova-metadata-0" Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.069838 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d40bc087-e558-4107-8e76-b5daa3ff73c1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d40bc087-e558-4107-8e76-b5daa3ff73c1\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.072697 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40bc087-e558-4107-8e76-b5daa3ff73c1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d40bc087-e558-4107-8e76-b5daa3ff73c1\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.075522 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-config-data\") pod \"nova-metadata-0\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " pod="openstack/nova-metadata-0" Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.089386 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn97q\" (UniqueName: \"kubernetes.io/projected/d40bc087-e558-4107-8e76-b5daa3ff73c1-kube-api-access-vn97q\") pod \"nova-cell1-novncproxy-0\" (UID: \"d40bc087-e558-4107-8e76-b5daa3ff73c1\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.099283 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78rwq\" (UniqueName: \"kubernetes.io/projected/eb99e52e-496c-48e1-a66b-71eb52b04370-kube-api-access-78rwq\") pod \"nova-metadata-0\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " pod="openstack/nova-metadata-0" Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.103151 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aad2102-2a68-40bf-9509-8ca72c8cb48a" path="/var/lib/kubelet/pods/1aad2102-2a68-40bf-9509-8ca72c8cb48a/volumes" Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.103843 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9ff86ff-9bf0-4ec8-b801-40fc71e53742" path="/var/lib/kubelet/pods/e9ff86ff-9bf0-4ec8-b801-40fc71e53742/volumes" Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.160793 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.191641 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.508110 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.661789 4860 generic.go:334] "Generic (PLEG): container finished" podID="33caa1e1-70c8-4eb2-b3ee-2400962b4a11" containerID="c27d1af20b44b8235541715814c8a7d9e4d5eb1755e1707a483361bf7a29a248" exitCode=0 Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.661864 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gkdc" event={"ID":"33caa1e1-70c8-4eb2-b3ee-2400962b4a11","Type":"ContainerDied","Data":"c27d1af20b44b8235541715814c8a7d9e4d5eb1755e1707a483361bf7a29a248"} Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.666001 4860 generic.go:334] "Generic (PLEG): container finished" podID="2229cb3a-9254-47e8-8006-715670fb974e" containerID="e54fc15e1f819958a10ded658d529f15cb06538bd0a25d53e071dd9de2b17f31" exitCode=0 Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.666073 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g2sn" event={"ID":"2229cb3a-9254-47e8-8006-715670fb974e","Type":"ContainerDied","Data":"e54fc15e1f819958a10ded658d529f15cb06538bd0a25d53e071dd9de2b17f31"} Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.672928 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eb99e52e-496c-48e1-a66b-71eb52b04370","Type":"ContainerStarted","Data":"e0724c063fcb7c82c728ec3718e5eeab219e0ceb74b12a29c2c0e58860cdff3e"} Oct 14 15:13:21 crc kubenswrapper[4860]: I1014 15:13:21.757689 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 15:13:21 crc kubenswrapper[4860]: W1014 15:13:21.758179 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd40bc087_e558_4107_8e76_b5daa3ff73c1.slice/crio-6aa3232006ca18ebd71b17a901b58baad531e10909c647d6b0275ad101eaca19 WatchSource:0}: Error finding container 6aa3232006ca18ebd71b17a901b58baad531e10909c647d6b0275ad101eaca19: Status 404 returned error can't find the container with id 6aa3232006ca18ebd71b17a901b58baad531e10909c647d6b0275ad101eaca19 Oct 14 15:13:22 crc kubenswrapper[4860]: I1014 15:13:22.688041 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eb99e52e-496c-48e1-a66b-71eb52b04370","Type":"ContainerStarted","Data":"1ceac79effdc50843871e1ded540ffa0461ed5d148698bb5f5bac0899f9023aa"} Oct 14 15:13:22 crc kubenswrapper[4860]: I1014 15:13:22.688366 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eb99e52e-496c-48e1-a66b-71eb52b04370","Type":"ContainerStarted","Data":"d4bd77ff227868e7bdbe067406587d3573ae92632207860815c4001f7ed831ec"} Oct 14 15:13:22 crc kubenswrapper[4860]: I1014 15:13:22.691613 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d40bc087-e558-4107-8e76-b5daa3ff73c1","Type":"ContainerStarted","Data":"013ed2bbe3170654b5b111c3c3ffc4cb8824eed63cd90bef2ddf6e4657a801de"} Oct 14 15:13:22 crc kubenswrapper[4860]: I1014 15:13:22.691774 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"d40bc087-e558-4107-8e76-b5daa3ff73c1","Type":"ContainerStarted","Data":"6aa3232006ca18ebd71b17a901b58baad531e10909c647d6b0275ad101eaca19"} Oct 14 15:13:22 crc kubenswrapper[4860]: I1014 15:13:22.724157 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.7241387660000003 podStartE2EDuration="2.724138766s" podCreationTimestamp="2025-10-14 15:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:13:22.71516291 +0000 UTC m=+1464.301946369" watchObservedRunningTime="2025-10-14 15:13:22.724138766 +0000 UTC m=+1464.310922235" Oct 14 15:13:24 crc kubenswrapper[4860]: I1014 15:13:24.710458 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gkdc" event={"ID":"33caa1e1-70c8-4eb2-b3ee-2400962b4a11","Type":"ContainerStarted","Data":"1138cd1747655ca9ec607240870edb3834dee29f40b4b571f0c482bd6767fa16"} Oct 14 15:13:24 crc kubenswrapper[4860]: I1014 15:13:24.714530 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g2sn" event={"ID":"2229cb3a-9254-47e8-8006-715670fb974e","Type":"ContainerStarted","Data":"b329a63171a4c5b419b00dc331160ce7e3fae43bcc8bc34776e8aaa38fddb07c"} Oct 14 15:13:24 crc kubenswrapper[4860]: I1014 15:13:24.734329 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.734307996 podStartE2EDuration="4.734307996s" podCreationTimestamp="2025-10-14 15:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:13:22.73958621 +0000 UTC m=+1464.326369669" watchObservedRunningTime="2025-10-14 15:13:24.734307996 +0000 UTC m=+1466.321091445" Oct 14 15:13:24 crc kubenswrapper[4860]: I1014 15:13:24.738965 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6gkdc" podStartSLOduration=3.429716493 podStartE2EDuration="13.738945358s" podCreationTimestamp="2025-10-14 15:13:11 +0000 UTC" firstStartedPulling="2025-10-14 15:13:13.475046432 +0000 UTC m=+1455.061829881" lastFinishedPulling="2025-10-14 15:13:23.784275297 +0000 UTC m=+1465.371058746" observedRunningTime="2025-10-14 15:13:24.727169794 +0000 UTC m=+1466.313953273" watchObservedRunningTime="2025-10-14 15:13:24.738945358 +0000 UTC m=+1466.325728807" Oct 14 15:13:24 crc kubenswrapper[4860]: I1014 15:13:24.751949 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9g2sn" podStartSLOduration=2.861524014 podStartE2EDuration="9.751923853s" podCreationTimestamp="2025-10-14 15:13:15 +0000 UTC" firstStartedPulling="2025-10-14 15:13:16.59407247 +0000 UTC m=+1458.180855919" lastFinishedPulling="2025-10-14 15:13:23.484472309 +0000 UTC m=+1465.071255758" observedRunningTime="2025-10-14 15:13:24.745372224 +0000 UTC m=+1466.332155693" watchObservedRunningTime="2025-10-14 15:13:24.751923853 +0000 UTC m=+1466.338707302" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.031425 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.031511 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 15:13:25 crc 
kubenswrapper[4860]: I1014 15:13:25.032116 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.032161 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.035524 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.037899 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.295815 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7k8rg"] Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.297845 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.317228 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7k8rg"] Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.332274 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-config\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.332406 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.332452 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm7zm\" (UniqueName: \"kubernetes.io/projected/c3522c43-5736-44e0-8671-da94de73685a-kube-api-access-tm7zm\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.332486 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.332515 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.332550 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " 
pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.397594 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9g2sn" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.397638 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9g2sn" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.438120 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.438504 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm7zm\" (UniqueName: \"kubernetes.io/projected/c3522c43-5736-44e0-8671-da94de73685a-kube-api-access-tm7zm\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.438551 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.438589 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.438625 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.438708 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-config\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.439493 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.439557 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.442806 
4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.442831 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.442852 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-config\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.472902 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm7zm\" (UniqueName: \"kubernetes.io/projected/c3522c43-5736-44e0-8671-da94de73685a-kube-api-access-tm7zm\") pod \"dnsmasq-dns-89c5cd4d5-7k8rg\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:25 crc kubenswrapper[4860]: I1014 15:13:25.642495 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:26 crc kubenswrapper[4860]: I1014 15:13:26.163834 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 15:13:26 crc kubenswrapper[4860]: I1014 15:13:26.164361 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 15:13:26 crc kubenswrapper[4860]: I1014 15:13:26.192066 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:13:26 crc kubenswrapper[4860]: I1014 15:13:26.203226 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7k8rg"] Oct 14 15:13:26 crc kubenswrapper[4860]: I1014 15:13:26.464121 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-9g2sn" podUID="2229cb3a-9254-47e8-8006-715670fb974e" containerName="registry-server" probeResult="failure" output=< Oct 14 15:13:26 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:13:26 crc kubenswrapper[4860]: > Oct 14 15:13:26 crc kubenswrapper[4860]: I1014 15:13:26.732202 4860 generic.go:334] "Generic (PLEG): container finished" podID="c3522c43-5736-44e0-8671-da94de73685a" containerID="0fa828ca29976bb6ec8ca33332d417d275afd207c27eb838c1cbc8ca0c6d24fd" exitCode=0 Oct 14 15:13:26 crc kubenswrapper[4860]: I1014 15:13:26.732431 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" event={"ID":"c3522c43-5736-44e0-8671-da94de73685a","Type":"ContainerDied","Data":"0fa828ca29976bb6ec8ca33332d417d275afd207c27eb838c1cbc8ca0c6d24fd"} Oct 14 15:13:26 crc kubenswrapper[4860]: I1014 15:13:26.732462 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" 
event={"ID":"c3522c43-5736-44e0-8671-da94de73685a","Type":"ContainerStarted","Data":"539c5220b8168ad5f835919e47ccaf48e069e96b089f25a9aed0eac92d0505ea"} Oct 14 15:13:27 crc kubenswrapper[4860]: I1014 15:13:27.215483 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-shll4" podUID="59dce401-ce86-4798-a1ef-6a520c406f54" containerName="registry-server" probeResult="failure" output=< Oct 14 15:13:27 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:13:27 crc kubenswrapper[4860]: > Oct 14 15:13:27 crc kubenswrapper[4860]: I1014 15:13:27.743929 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" event={"ID":"c3522c43-5736-44e0-8671-da94de73685a","Type":"ContainerStarted","Data":"0357c1e02a8087ed3f3969c78b515489f855eeee95a158b3e7ccaf98be5220da"} Oct 14 15:13:27 crc kubenswrapper[4860]: I1014 15:13:27.744381 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:27 crc kubenswrapper[4860]: I1014 15:13:27.769768 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" podStartSLOduration=2.769750904 podStartE2EDuration="2.769750904s" podCreationTimestamp="2025-10-14 15:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:13:27.76465911 +0000 UTC m=+1469.351442549" watchObservedRunningTime="2025-10-14 15:13:27.769750904 +0000 UTC m=+1469.356534373" Oct 14 15:13:28 crc kubenswrapper[4860]: I1014 15:13:28.343592 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 15:13:28 crc kubenswrapper[4860]: I1014 15:13:28.343824 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5a10352d-79c7-44da-b182-2fe199712ddf" containerName="nova-api-log" containerID="cri-o://a94b6b5b5b86da099b5b12ffc576198382a1b74bca44a8a68e2e4fb6f1eaf888" gracePeriod=30 Oct 14 15:13:28 crc kubenswrapper[4860]: I1014 15:13:28.343915 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5a10352d-79c7-44da-b182-2fe199712ddf" containerName="nova-api-api" containerID="cri-o://5bd2c94aa6876369db2ed742cf427a87fa3fa1744ca88600b9d113d24843952b" gracePeriod=30 Oct 14 15:13:28 crc kubenswrapper[4860]: I1014 15:13:28.657633 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:13:28 crc kubenswrapper[4860]: I1014 15:13:28.658313 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="ceilometer-central-agent" containerID="cri-o://cf7c1f3e1460b3b61e0fff38b5a09d90ab2468bc270508e7615b638d0ef789c4" gracePeriod=30 Oct 14 15:13:28 crc kubenswrapper[4860]: I1014 15:13:28.658678 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="proxy-httpd" containerID="cri-o://5d4f8f0f82d6fc7894a387ca38ca4cc8427e55704c03907f624d2a559b98dd92" gracePeriod=30 Oct 14 15:13:28 crc kubenswrapper[4860]: I1014 15:13:28.658714 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="sg-core" 
containerID="cri-o://cf9e19032233da7146e6d9aa5110888e098e418f536a5e859c531a90761c6e80" gracePeriod=30 Oct 14 15:13:28 crc kubenswrapper[4860]: I1014 15:13:28.658761 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="ceilometer-notification-agent" containerID="cri-o://d4fae14577e2a847b30a1a842fd9d826abb6290d24db28d038fd79af513eb671" gracePeriod=30 Oct 14 15:13:28 crc kubenswrapper[4860]: I1014 15:13:28.687549 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.198:3000/\": EOF" Oct 14 15:13:28 crc kubenswrapper[4860]: I1014 15:13:28.757071 4860 generic.go:334] "Generic (PLEG): container finished" podID="5a10352d-79c7-44da-b182-2fe199712ddf" containerID="a94b6b5b5b86da099b5b12ffc576198382a1b74bca44a8a68e2e4fb6f1eaf888" exitCode=143 Oct 14 15:13:28 crc kubenswrapper[4860]: I1014 15:13:28.757360 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5a10352d-79c7-44da-b182-2fe199712ddf","Type":"ContainerDied","Data":"a94b6b5b5b86da099b5b12ffc576198382a1b74bca44a8a68e2e4fb6f1eaf888"} Oct 14 15:13:29 crc kubenswrapper[4860]: I1014 15:13:29.245324 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:13:29 crc kubenswrapper[4860]: I1014 15:13:29.245380 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:13:29 crc kubenswrapper[4860]: E1014 15:13:29.455158 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f848bea_23dc_4318_9677_ebbd4fe34a09.slice/crio-cf7c1f3e1460b3b61e0fff38b5a09d90ab2468bc270508e7615b638d0ef789c4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f848bea_23dc_4318_9677_ebbd4fe34a09.slice/crio-conmon-cf7c1f3e1460b3b61e0fff38b5a09d90ab2468bc270508e7615b638d0ef789c4.scope\": RecentStats: unable to find data in memory cache]" Oct 14 15:13:29 crc kubenswrapper[4860]: I1014 15:13:29.770841 4860 generic.go:334] "Generic (PLEG): container finished" podID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerID="5d4f8f0f82d6fc7894a387ca38ca4cc8427e55704c03907f624d2a559b98dd92" exitCode=0 Oct 14 15:13:29 crc kubenswrapper[4860]: I1014 15:13:29.770876 4860 generic.go:334] "Generic (PLEG): container finished" podID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerID="cf9e19032233da7146e6d9aa5110888e098e418f536a5e859c531a90761c6e80" exitCode=2 Oct 14 15:13:29 crc kubenswrapper[4860]: I1014 15:13:29.770884 4860 generic.go:334] "Generic (PLEG): container finished" podID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerID="cf7c1f3e1460b3b61e0fff38b5a09d90ab2468bc270508e7615b638d0ef789c4" exitCode=0 Oct 14 15:13:29 crc kubenswrapper[4860]: I1014 15:13:29.770902 4860 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f848bea-23dc-4318-9677-ebbd4fe34a09","Type":"ContainerDied","Data":"5d4f8f0f82d6fc7894a387ca38ca4cc8427e55704c03907f624d2a559b98dd92"} Oct 14 15:13:29 crc kubenswrapper[4860]: I1014 15:13:29.770925 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f848bea-23dc-4318-9677-ebbd4fe34a09","Type":"ContainerDied","Data":"cf9e19032233da7146e6d9aa5110888e098e418f536a5e859c531a90761c6e80"} Oct 14 15:13:29 crc kubenswrapper[4860]: I1014 15:13:29.770934 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f848bea-23dc-4318-9677-ebbd4fe34a09","Type":"ContainerDied","Data":"cf7c1f3e1460b3b61e0fff38b5a09d90ab2468bc270508e7615b638d0ef789c4"} Oct 14 15:13:31 crc kubenswrapper[4860]: I1014 15:13:31.161975 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 15:13:31 crc kubenswrapper[4860]: I1014 15:13:31.162042 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 15:13:31 crc kubenswrapper[4860]: I1014 15:13:31.192041 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:13:31 crc kubenswrapper[4860]: I1014 15:13:31.231582 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:13:31 crc kubenswrapper[4860]: I1014 15:13:31.794606 4860 generic.go:334] "Generic (PLEG): container finished" podID="5a10352d-79c7-44da-b182-2fe199712ddf" containerID="5bd2c94aa6876369db2ed742cf427a87fa3fa1744ca88600b9d113d24843952b" exitCode=0 Oct 14 15:13:31 crc kubenswrapper[4860]: I1014 15:13:31.794737 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5a10352d-79c7-44da-b182-2fe199712ddf","Type":"ContainerDied","Data":"5bd2c94aa6876369db2ed742cf427a87fa3fa1744ca88600b9d113d24843952b"} Oct 14 15:13:31 crc kubenswrapper[4860]: I1014 15:13:31.814365 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 14 15:13:31 crc kubenswrapper[4860]: I1014 15:13:31.987775 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-wdb8w"] Oct 14 15:13:31 crc kubenswrapper[4860]: I1014 15:13:31.990361 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wdb8w" Oct 14 15:13:31 crc kubenswrapper[4860]: W1014 15:13:31.996340 4860 reflector.go:561] object-"openstack"/"nova-cell1-manage-scripts": failed to list *v1.Secret: secrets "nova-cell1-manage-scripts" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Oct 14 15:13:31 crc kubenswrapper[4860]: E1014 15:13:31.996389 4860 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell1-manage-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-cell1-manage-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 14 15:13:31 crc kubenswrapper[4860]: W1014 15:13:31.996453 4860 reflector.go:561] object-"openstack"/"nova-cell1-manage-config-data": failed to list *v1.Secret: secrets "nova-cell1-manage-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Oct 14 15:13:31 crc kubenswrapper[4860]: E1014 15:13:31.996467 4860 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell1-manage-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-cell1-manage-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.022251 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wdb8w"] Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.031133 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.043503 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6gkdc" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.043558 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6gkdc" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.177273 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="eb99e52e-496c-48e1-a66b-71eb52b04370" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.177313 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="eb99e52e-496c-48e1-a66b-71eb52b04370" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.177475 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a10352d-79c7-44da-b182-2fe199712ddf-config-data\") pod \"5a10352d-79c7-44da-b182-2fe199712ddf\" (UID: \"5a10352d-79c7-44da-b182-2fe199712ddf\") " Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.177796 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a10352d-79c7-44da-b182-2fe199712ddf-logs\") pod \"5a10352d-79c7-44da-b182-2fe199712ddf\" (UID: \"5a10352d-79c7-44da-b182-2fe199712ddf\") " Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.177973 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl4kx\" (UniqueName: \"kubernetes.io/projected/5a10352d-79c7-44da-b182-2fe199712ddf-kube-api-access-wl4kx\") pod \"5a10352d-79c7-44da-b182-2fe199712ddf\" (UID: \"5a10352d-79c7-44da-b182-2fe199712ddf\") " Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.178058 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a10352d-79c7-44da-b182-2fe199712ddf-combined-ca-bundle\") pod \"5a10352d-79c7-44da-b182-2fe199712ddf\" (UID: \"5a10352d-79c7-44da-b182-2fe199712ddf\") " Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.178373 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a10352d-79c7-44da-b182-2fe199712ddf-logs" (OuterVolumeSpecName: "logs") pod "5a10352d-79c7-44da-b182-2fe199712ddf" (UID: "5a10352d-79c7-44da-b182-2fe199712ddf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.178443 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42cpj\" (UniqueName: \"kubernetes.io/projected/26fd2566-3969-4d61-9bf0-9944df693a16-kube-api-access-42cpj\") pod \"nova-cell1-cell-mapping-wdb8w\" (UID: \"26fd2566-3969-4d61-9bf0-9944df693a16\") " pod="openstack/nova-cell1-cell-mapping-wdb8w" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.178506 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-scripts\") pod \"nova-cell1-cell-mapping-wdb8w\" (UID: \"26fd2566-3969-4d61-9bf0-9944df693a16\") " pod="openstack/nova-cell1-cell-mapping-wdb8w" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.178785 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wdb8w\" (UID: \"26fd2566-3969-4d61-9bf0-9944df693a16\") " pod="openstack/nova-cell1-cell-mapping-wdb8w" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.178814 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-config-data\") pod \"nova-cell1-cell-mapping-wdb8w\" (UID: \"26fd2566-3969-4d61-9bf0-9944df693a16\") " pod="openstack/nova-cell1-cell-mapping-wdb8w" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.179000 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a10352d-79c7-44da-b182-2fe199712ddf-logs\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.191319 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a10352d-79c7-44da-b182-2fe199712ddf-kube-api-access-wl4kx" (OuterVolumeSpecName: "kube-api-access-wl4kx") pod "5a10352d-79c7-44da-b182-2fe199712ddf" (UID: "5a10352d-79c7-44da-b182-2fe199712ddf"). InnerVolumeSpecName "kube-api-access-wl4kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.235177 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a10352d-79c7-44da-b182-2fe199712ddf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a10352d-79c7-44da-b182-2fe199712ddf" (UID: "5a10352d-79c7-44da-b182-2fe199712ddf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.243147 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a10352d-79c7-44da-b182-2fe199712ddf-config-data" (OuterVolumeSpecName: "config-data") pod "5a10352d-79c7-44da-b182-2fe199712ddf" (UID: "5a10352d-79c7-44da-b182-2fe199712ddf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.280942 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42cpj\" (UniqueName: \"kubernetes.io/projected/26fd2566-3969-4d61-9bf0-9944df693a16-kube-api-access-42cpj\") pod \"nova-cell1-cell-mapping-wdb8w\" (UID: \"26fd2566-3969-4d61-9bf0-9944df693a16\") " pod="openstack/nova-cell1-cell-mapping-wdb8w" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.281020 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-scripts\") pod \"nova-cell1-cell-mapping-wdb8w\" (UID: \"26fd2566-3969-4d61-9bf0-9944df693a16\") " pod="openstack/nova-cell1-cell-mapping-wdb8w" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.281207 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wdb8w\" (UID: \"26fd2566-3969-4d61-9bf0-9944df693a16\") " pod="openstack/nova-cell1-cell-mapping-wdb8w" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.281228 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-config-data\") pod \"nova-cell1-cell-mapping-wdb8w\" (UID: \"26fd2566-3969-4d61-9bf0-9944df693a16\") " pod="openstack/nova-cell1-cell-mapping-wdb8w" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.281323 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl4kx\" (UniqueName: \"kubernetes.io/projected/5a10352d-79c7-44da-b182-2fe199712ddf-kube-api-access-wl4kx\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.281340 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a10352d-79c7-44da-b182-2fe199712ddf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.281350 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a10352d-79c7-44da-b182-2fe199712ddf-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.284736 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wdb8w\" (UID: \"26fd2566-3969-4d61-9bf0-9944df693a16\") " pod="openstack/nova-cell1-cell-mapping-wdb8w" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.298896 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42cpj\" (UniqueName: \"kubernetes.io/projected/26fd2566-3969-4d61-9bf0-9944df693a16-kube-api-access-42cpj\") pod \"nova-cell1-cell-mapping-wdb8w\" (UID: \"26fd2566-3969-4d61-9bf0-9944df693a16\") " pod="openstack/nova-cell1-cell-mapping-wdb8w" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.807716 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.807701 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5a10352d-79c7-44da-b182-2fe199712ddf","Type":"ContainerDied","Data":"8a53b59a4d3961d320e26378c86e06f3c1df8595efb610f31e1cc1eb2291a2ff"} Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.807795 4860 scope.go:117] "RemoveContainer" containerID="5bd2c94aa6876369db2ed742cf427a87fa3fa1744ca88600b9d113d24843952b" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.869990 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.877189 4860 scope.go:117] "RemoveContainer" containerID="a94b6b5b5b86da099b5b12ffc576198382a1b74bca44a8a68e2e4fb6f1eaf888" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.882872 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.902353 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.902517 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 15:13:32 crc kubenswrapper[4860]: E1014 15:13:32.902884 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a10352d-79c7-44da-b182-2fe199712ddf" containerName="nova-api-log" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.902899 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a10352d-79c7-44da-b182-2fe199712ddf" containerName="nova-api-log" Oct 14 15:13:32 crc kubenswrapper[4860]: E1014 15:13:32.902918 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a10352d-79c7-44da-b182-2fe199712ddf" containerName="nova-api-api" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.902923 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a10352d-79c7-44da-b182-2fe199712ddf" containerName="nova-api-api" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.903150 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a10352d-79c7-44da-b182-2fe199712ddf" containerName="nova-api-api" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.903180 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a10352d-79c7-44da-b182-2fe199712ddf" containerName="nova-api-log" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.906951 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.912593 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.914800 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.914817 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.915726 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.934809 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-config-data\") pod \"nova-cell1-cell-mapping-wdb8w\" (UID: \"26fd2566-3969-4d61-9bf0-9944df693a16\") " pod="openstack/nova-cell1-cell-mapping-wdb8w" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.996619 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-config-data\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.996719 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7gkc\" (UniqueName: \"kubernetes.io/projected/6a643628-4b73-4dfe-bb93-145dcf750ae6-kube-api-access-x7gkc\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.996767 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-public-tls-certs\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.996895 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a643628-4b73-4dfe-bb93-145dcf750ae6-logs\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.996928 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:32 crc kubenswrapper[4860]: I1014 15:13:32.997520 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.072758 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a10352d-79c7-44da-b182-2fe199712ddf" path="/var/lib/kubelet/pods/5a10352d-79c7-44da-b182-2fe199712ddf/volumes" Oct 14 
15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.089175 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.095311 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-scripts\") pod \"nova-cell1-cell-mapping-wdb8w\" (UID: \"26fd2566-3969-4d61-9bf0-9944df693a16\") " pod="openstack/nova-cell1-cell-mapping-wdb8w" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.099315 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.099413 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-config-data\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.099457 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7gkc\" (UniqueName: \"kubernetes.io/projected/6a643628-4b73-4dfe-bb93-145dcf750ae6-kube-api-access-x7gkc\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.099487 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-public-tls-certs\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.099536 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a643628-4b73-4dfe-bb93-145dcf750ae6-logs\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.099566 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.100794 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a643628-4b73-4dfe-bb93-145dcf750ae6-logs\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.105250 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-public-tls-certs\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.105594 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.108539 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.109137 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-config-data\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.122978 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6gkdc" podUID="33caa1e1-70c8-4eb2-b3ee-2400962b4a11" containerName="registry-server" probeResult="failure" output=< Oct 14 15:13:33 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:13:33 crc kubenswrapper[4860]: > Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.124850 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7gkc\" (UniqueName: \"kubernetes.io/projected/6a643628-4b73-4dfe-bb93-145dcf750ae6-kube-api-access-x7gkc\") pod \"nova-api-0\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") " pod="openstack/nova-api-0" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.257061 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wdb8w" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.277548 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.874288 4860 generic.go:334] "Generic (PLEG): container finished" podID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerID="d4fae14577e2a847b30a1a842fd9d826abb6290d24db28d038fd79af513eb671" exitCode=0 Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.874560 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f848bea-23dc-4318-9677-ebbd4fe34a09","Type":"ContainerDied","Data":"d4fae14577e2a847b30a1a842fd9d826abb6290d24db28d038fd79af513eb671"} Oct 14 15:13:33 crc kubenswrapper[4860]: I1014 15:13:33.909671 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wdb8w"] Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.110929 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.349457 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.461226 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-sg-core-conf-yaml\") pod \"7f848bea-23dc-4318-9677-ebbd4fe34a09\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.461270 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88rrc\" (UniqueName: \"kubernetes.io/projected/7f848bea-23dc-4318-9677-ebbd4fe34a09-kube-api-access-88rrc\") pod \"7f848bea-23dc-4318-9677-ebbd4fe34a09\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.461321 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-scripts\") pod \"7f848bea-23dc-4318-9677-ebbd4fe34a09\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.461367 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f848bea-23dc-4318-9677-ebbd4fe34a09-run-httpd\") pod \"7f848bea-23dc-4318-9677-ebbd4fe34a09\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.461439 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f848bea-23dc-4318-9677-ebbd4fe34a09-log-httpd\") pod \"7f848bea-23dc-4318-9677-ebbd4fe34a09\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.461458 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-config-data\") pod \"7f848bea-23dc-4318-9677-ebbd4fe34a09\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.461524 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-combined-ca-bundle\") pod \"7f848bea-23dc-4318-9677-ebbd4fe34a09\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.461591 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-ceilometer-tls-certs\") pod \"7f848bea-23dc-4318-9677-ebbd4fe34a09\" (UID: \"7f848bea-23dc-4318-9677-ebbd4fe34a09\") " Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.462382 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f848bea-23dc-4318-9677-ebbd4fe34a09-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7f848bea-23dc-4318-9677-ebbd4fe34a09" (UID: "7f848bea-23dc-4318-9677-ebbd4fe34a09"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.462640 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f848bea-23dc-4318-9677-ebbd4fe34a09-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7f848bea-23dc-4318-9677-ebbd4fe34a09" (UID: "7f848bea-23dc-4318-9677-ebbd4fe34a09"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.466911 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-scripts" (OuterVolumeSpecName: "scripts") pod "7f848bea-23dc-4318-9677-ebbd4fe34a09" (UID: "7f848bea-23dc-4318-9677-ebbd4fe34a09"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.471557 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f848bea-23dc-4318-9677-ebbd4fe34a09-kube-api-access-88rrc" (OuterVolumeSpecName: "kube-api-access-88rrc") pod "7f848bea-23dc-4318-9677-ebbd4fe34a09" (UID: "7f848bea-23dc-4318-9677-ebbd4fe34a09"). InnerVolumeSpecName "kube-api-access-88rrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.536056 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7f848bea-23dc-4318-9677-ebbd4fe34a09" (UID: "7f848bea-23dc-4318-9677-ebbd4fe34a09"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.564721 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88rrc\" (UniqueName: \"kubernetes.io/projected/7f848bea-23dc-4318-9677-ebbd4fe34a09-kube-api-access-88rrc\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.564822 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.564834 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f848bea-23dc-4318-9677-ebbd4fe34a09-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.564843 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f848bea-23dc-4318-9677-ebbd4fe34a09-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.564852 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.585338 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7f848bea-23dc-4318-9677-ebbd4fe34a09" (UID: "7f848bea-23dc-4318-9677-ebbd4fe34a09"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.643912 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f848bea-23dc-4318-9677-ebbd4fe34a09" (UID: "7f848bea-23dc-4318-9677-ebbd4fe34a09"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.648732 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-config-data" (OuterVolumeSpecName: "config-data") pod "7f848bea-23dc-4318-9677-ebbd4fe34a09" (UID: "7f848bea-23dc-4318-9677-ebbd4fe34a09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.668397 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.668438 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.668452 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f848bea-23dc-4318-9677-ebbd4fe34a09-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.886288 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f848bea-23dc-4318-9677-ebbd4fe34a09","Type":"ContainerDied","Data":"a1ae726fdad9bd4c1a805c5a223bc9c2fdf8105e07e4a0e31b8b6c8d53088aed"} Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.886632 4860 scope.go:117] "RemoveContainer" containerID="5d4f8f0f82d6fc7894a387ca38ca4cc8427e55704c03907f624d2a559b98dd92" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.886766 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.891077 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wdb8w" event={"ID":"26fd2566-3969-4d61-9bf0-9944df693a16","Type":"ContainerStarted","Data":"c3e740fdc4bd5da9ab7bf4ed22df472ae6d7eefa0ff590e11d50f493c88d0647"} Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.891113 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wdb8w" event={"ID":"26fd2566-3969-4d61-9bf0-9944df693a16","Type":"ContainerStarted","Data":"f06a0a1e5a54e8923ade3dddbf5090162c5a21cc6b5a22c4c721b9f11be98e62"} Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.897588 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a643628-4b73-4dfe-bb93-145dcf750ae6","Type":"ContainerStarted","Data":"d860c018ed80b629c79f48879a26738eb902528b135c477a8703f7f2bd9f1f02"} Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.897634 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a643628-4b73-4dfe-bb93-145dcf750ae6","Type":"ContainerStarted","Data":"bf8aba1e44e1e3623d43c710095d30f05ac374a17a5d841236815f9bd5a2bf89"} Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.897643 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a643628-4b73-4dfe-bb93-145dcf750ae6","Type":"ContainerStarted","Data":"b1c557009d999134f89f9b3ad056b8769808718fbd8aef8680a2c781b1f50916"} Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.918752 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-wdb8w" podStartSLOduration=3.918735454 podStartE2EDuration="3.918735454s" podCreationTimestamp="2025-10-14 15:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:13:34.909312066 +0000 UTC m=+1476.496095525" watchObservedRunningTime="2025-10-14 15:13:34.918735454 +0000 UTC m=+1476.505518903" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.929178 4860 scope.go:117] "RemoveContainer" containerID="cf9e19032233da7146e6d9aa5110888e098e418f536a5e859c531a90761c6e80" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.933255 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.951989 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.958193 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.958171168 podStartE2EDuration="2.958171168s" podCreationTimestamp="2025-10-14 15:13:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:13:34.952316436 +0000 UTC m=+1476.539099885" watchObservedRunningTime="2025-10-14 15:13:34.958171168 +0000 UTC m=+1476.544954607" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.965002 4860 scope.go:117] "RemoveContainer" containerID="d4fae14577e2a847b30a1a842fd9d826abb6290d24db28d038fd79af513eb671" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.977392 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:13:34 crc kubenswrapper[4860]: E1014 
15:13:34.977828 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="ceilometer-notification-agent" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.977847 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="ceilometer-notification-agent" Oct 14 15:13:34 crc kubenswrapper[4860]: E1014 15:13:34.977865 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="proxy-httpd" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.977872 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="proxy-httpd" Oct 14 15:13:34 crc kubenswrapper[4860]: E1014 15:13:34.977883 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="ceilometer-central-agent" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.977889 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="ceilometer-central-agent" Oct 14 15:13:34 crc kubenswrapper[4860]: E1014 15:13:34.977908 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="sg-core" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.977914 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="sg-core" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.978613 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="ceilometer-central-agent" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.978633 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="proxy-httpd" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.978641 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="ceilometer-notification-agent" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.978660 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" containerName="sg-core" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.980888 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.983786 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.984135 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.984254 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 14 15:13:34 crc kubenswrapper[4860]: I1014 15:13:34.999261 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.008329 4860 scope.go:117] "RemoveContainer" containerID="cf7c1f3e1460b3b61e0fff38b5a09d90ab2468bc270508e7615b638d0ef789c4" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.072755 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f848bea-23dc-4318-9677-ebbd4fe34a09" path="/var/lib/kubelet/pods/7f848bea-23dc-4318-9677-ebbd4fe34a09/volumes" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.075724 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-log-httpd\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.076161 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.076291 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-run-httpd\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.076341 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.076381 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.076432 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dznn\" (UniqueName: \"kubernetes.io/projected/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-kube-api-access-9dznn\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.076475 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-scripts\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.076517 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-config-data\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.178544 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-run-httpd\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.178833 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.178934 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.179057 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dznn\" (UniqueName: \"kubernetes.io/projected/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-kube-api-access-9dznn\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.179182 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-scripts\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.179306 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-config-data\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.179418 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-log-httpd\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.179542 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.183879 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.184222 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-run-httpd\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.186353 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-log-httpd\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.190923 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-scripts\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.192504 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-config-data\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.196753 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.199459 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.222741 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dznn\" (UniqueName: \"kubernetes.io/projected/0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee-kube-api-access-9dznn\") pod \"ceilometer-0\" (UID: \"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee\") " pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.329243 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.462951 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9g2sn" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.546959 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9g2sn" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.644273 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.709953 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-kcm5k"] Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.710351 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" podUID="0fc82d3c-afa8-4c2a-9e49-531b56497332" containerName="dnsmasq-dns" containerID="cri-o://20c33a589d90cf004180ea32861dc77fec10ea1b083bf4b0ab8dc6fc7440e913" gracePeriod=10 Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.742359 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9g2sn"] Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.839544 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.912179 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee","Type":"ContainerStarted","Data":"d296c566abe529ad4638850e4838aa85f7c698199102c852e542b8a2a236c920"} Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.913981 4860 generic.go:334] "Generic (PLEG): container finished" podID="0fc82d3c-afa8-4c2a-9e49-531b56497332" containerID="20c33a589d90cf004180ea32861dc77fec10ea1b083bf4b0ab8dc6fc7440e913" exitCode=0 Oct 14 15:13:35 crc kubenswrapper[4860]: I1014 15:13:35.915089 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" event={"ID":"0fc82d3c-afa8-4c2a-9e49-531b56497332","Type":"ContainerDied","Data":"20c33a589d90cf004180ea32861dc77fec10ea1b083bf4b0ab8dc6fc7440e913"} Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.254307 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.312246 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-ovsdbserver-nb\") pod \"0fc82d3c-afa8-4c2a-9e49-531b56497332\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.312432 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqwl8\" (UniqueName: \"kubernetes.io/projected/0fc82d3c-afa8-4c2a-9e49-531b56497332-kube-api-access-dqwl8\") pod \"0fc82d3c-afa8-4c2a-9e49-531b56497332\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.312473 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-dns-svc\") pod \"0fc82d3c-afa8-4c2a-9e49-531b56497332\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.312492 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-dns-swift-storage-0\") pod \"0fc82d3c-afa8-4c2a-9e49-531b56497332\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.312534 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-config\") pod \"0fc82d3c-afa8-4c2a-9e49-531b56497332\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.312593 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-ovsdbserver-sb\") pod \"0fc82d3c-afa8-4c2a-9e49-531b56497332\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.324632 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fc82d3c-afa8-4c2a-9e49-531b56497332-kube-api-access-dqwl8" (OuterVolumeSpecName: "kube-api-access-dqwl8") pod "0fc82d3c-afa8-4c2a-9e49-531b56497332" (UID: "0fc82d3c-afa8-4c2a-9e49-531b56497332"). InnerVolumeSpecName "kube-api-access-dqwl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.388771 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-config" (OuterVolumeSpecName: "config") pod "0fc82d3c-afa8-4c2a-9e49-531b56497332" (UID: "0fc82d3c-afa8-4c2a-9e49-531b56497332"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.394098 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0fc82d3c-afa8-4c2a-9e49-531b56497332" (UID: "0fc82d3c-afa8-4c2a-9e49-531b56497332"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.415704 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqwl8\" (UniqueName: \"kubernetes.io/projected/0fc82d3c-afa8-4c2a-9e49-531b56497332-kube-api-access-dqwl8\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.415735 4860 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.415744 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.448395 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0fc82d3c-afa8-4c2a-9e49-531b56497332" (UID: "0fc82d3c-afa8-4c2a-9e49-531b56497332"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:13:36 crc kubenswrapper[4860]: E1014 15:13:36.449777 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-dns-swift-storage-0 podName:0fc82d3c-afa8-4c2a-9e49-531b56497332 nodeName:}" failed. No retries permitted until 2025-10-14 15:13:36.949741669 +0000 UTC m=+1478.536525118 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "dns-swift-storage-0" (UniqueName: "kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-dns-swift-storage-0") pod "0fc82d3c-afa8-4c2a-9e49-531b56497332" (UID: "0fc82d3c-afa8-4c2a-9e49-531b56497332") : error deleting /var/lib/kubelet/pods/0fc82d3c-afa8-4c2a-9e49-531b56497332/volume-subpaths: remove /var/lib/kubelet/pods/0fc82d3c-afa8-4c2a-9e49-531b56497332/volume-subpaths: no such file or directory Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.450357 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0fc82d3c-afa8-4c2a-9e49-531b56497332" (UID: "0fc82d3c-afa8-4c2a-9e49-531b56497332"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.518495 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.518521 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.925266 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee","Type":"ContainerStarted","Data":"e712f4feec96d2768e41645a1f281e3a9426df1f0e2c41b29b08f425fbbcb87b"} Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.927546 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" event={"ID":"0fc82d3c-afa8-4c2a-9e49-531b56497332","Type":"ContainerDied","Data":"c23716352a7abd96113f39bb7a7d705cf2ce1121c6205056fd385856d24b9695"} Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.927609 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-kcm5k" Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.927614 4860 scope.go:117] "RemoveContainer" containerID="20c33a589d90cf004180ea32861dc77fec10ea1b083bf4b0ab8dc6fc7440e913" Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.927681 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9g2sn" podUID="2229cb3a-9254-47e8-8006-715670fb974e" containerName="registry-server" containerID="cri-o://b329a63171a4c5b419b00dc331160ce7e3fae43bcc8bc34776e8aaa38fddb07c" gracePeriod=2 Oct 14 15:13:36 crc kubenswrapper[4860]: I1014 15:13:36.955402 4860 scope.go:117] "RemoveContainer" containerID="4ad272f4e561fdf43ca32ae9550a8d6c56af7658ba6b7956199a896f4a27fb59" Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.026739 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-dns-swift-storage-0\") pod \"0fc82d3c-afa8-4c2a-9e49-531b56497332\" (UID: \"0fc82d3c-afa8-4c2a-9e49-531b56497332\") " Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.027468 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0fc82d3c-afa8-4c2a-9e49-531b56497332" (UID: "0fc82d3c-afa8-4c2a-9e49-531b56497332"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.129366 4860 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0fc82d3c-afa8-4c2a-9e49-531b56497332-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.166100 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-shll4" podUID="59dce401-ce86-4798-a1ef-6a520c406f54" containerName="registry-server" probeResult="failure" output=< Oct 14 15:13:37 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:13:37 crc kubenswrapper[4860]: > Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.263969 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-kcm5k"] Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.277818 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-kcm5k"] Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.525010 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9g2sn" Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.638189 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pqhx\" (UniqueName: \"kubernetes.io/projected/2229cb3a-9254-47e8-8006-715670fb974e-kube-api-access-9pqhx\") pod \"2229cb3a-9254-47e8-8006-715670fb974e\" (UID: \"2229cb3a-9254-47e8-8006-715670fb974e\") " Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.639263 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2229cb3a-9254-47e8-8006-715670fb974e-utilities\") pod \"2229cb3a-9254-47e8-8006-715670fb974e\" (UID: \"2229cb3a-9254-47e8-8006-715670fb974e\") " Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.639512 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2229cb3a-9254-47e8-8006-715670fb974e-catalog-content\") pod \"2229cb3a-9254-47e8-8006-715670fb974e\" (UID: \"2229cb3a-9254-47e8-8006-715670fb974e\") " Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.639868 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2229cb3a-9254-47e8-8006-715670fb974e-utilities" (OuterVolumeSpecName: "utilities") pod "2229cb3a-9254-47e8-8006-715670fb974e" (UID: "2229cb3a-9254-47e8-8006-715670fb974e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.640550 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2229cb3a-9254-47e8-8006-715670fb974e-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.647695 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2229cb3a-9254-47e8-8006-715670fb974e-kube-api-access-9pqhx" (OuterVolumeSpecName: "kube-api-access-9pqhx") pod "2229cb3a-9254-47e8-8006-715670fb974e" (UID: "2229cb3a-9254-47e8-8006-715670fb974e"). InnerVolumeSpecName "kube-api-access-9pqhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.664056 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2229cb3a-9254-47e8-8006-715670fb974e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2229cb3a-9254-47e8-8006-715670fb974e" (UID: "2229cb3a-9254-47e8-8006-715670fb974e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.742267 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pqhx\" (UniqueName: \"kubernetes.io/projected/2229cb3a-9254-47e8-8006-715670fb974e-kube-api-access-9pqhx\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.742890 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2229cb3a-9254-47e8-8006-715670fb974e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.940451 4860 generic.go:334] "Generic (PLEG): container finished" podID="2229cb3a-9254-47e8-8006-715670fb974e" containerID="b329a63171a4c5b419b00dc331160ce7e3fae43bcc8bc34776e8aaa38fddb07c" exitCode=0 Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.940517 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9g2sn" Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.940540 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g2sn" event={"ID":"2229cb3a-9254-47e8-8006-715670fb974e","Type":"ContainerDied","Data":"b329a63171a4c5b419b00dc331160ce7e3fae43bcc8bc34776e8aaa38fddb07c"} Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.941668 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g2sn" event={"ID":"2229cb3a-9254-47e8-8006-715670fb974e","Type":"ContainerDied","Data":"b3d7f0674243632a0c28a5f1cc1a1a9a9bcb8488739a73e64f89689d26d5100e"} Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.941700 4860 scope.go:117] "RemoveContainer" containerID="b329a63171a4c5b419b00dc331160ce7e3fae43bcc8bc34776e8aaa38fddb07c" Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.943180 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee","Type":"ContainerStarted","Data":"eb8e85dd3abdaff6f8bb054b4d50a2dfb5f5c47f5e45cf75e0f515ac23742c70"} Oct 14 15:13:37 crc kubenswrapper[4860]: I1014 15:13:37.993390 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9g2sn"] Oct 14 15:13:38 crc kubenswrapper[4860]: I1014 15:13:38.000893 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9g2sn"] Oct 14 15:13:38 crc kubenswrapper[4860]: I1014 15:13:38.012816 4860 scope.go:117] "RemoveContainer" containerID="e54fc15e1f819958a10ded658d529f15cb06538bd0a25d53e071dd9de2b17f31" Oct 14 15:13:38 crc kubenswrapper[4860]: I1014 15:13:38.044682 4860 scope.go:117] "RemoveContainer" containerID="0e3a071c7ee8eb6ca208ce634e05441e99870476f7f39d3a735100444bd1bd7a" Oct 14 15:13:38 crc kubenswrapper[4860]: I1014 15:13:38.087626 4860 scope.go:117] "RemoveContainer" containerID="b329a63171a4c5b419b00dc331160ce7e3fae43bcc8bc34776e8aaa38fddb07c" Oct 14 15:13:38 crc kubenswrapper[4860]: E1014 
15:13:38.088150 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b329a63171a4c5b419b00dc331160ce7e3fae43bcc8bc34776e8aaa38fddb07c\": container with ID starting with b329a63171a4c5b419b00dc331160ce7e3fae43bcc8bc34776e8aaa38fddb07c not found: ID does not exist" containerID="b329a63171a4c5b419b00dc331160ce7e3fae43bcc8bc34776e8aaa38fddb07c" Oct 14 15:13:38 crc kubenswrapper[4860]: I1014 15:13:38.088197 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b329a63171a4c5b419b00dc331160ce7e3fae43bcc8bc34776e8aaa38fddb07c"} err="failed to get container status \"b329a63171a4c5b419b00dc331160ce7e3fae43bcc8bc34776e8aaa38fddb07c\": rpc error: code = NotFound desc = could not find container \"b329a63171a4c5b419b00dc331160ce7e3fae43bcc8bc34776e8aaa38fddb07c\": container with ID starting with b329a63171a4c5b419b00dc331160ce7e3fae43bcc8bc34776e8aaa38fddb07c not found: ID does not exist" Oct 14 15:13:38 crc kubenswrapper[4860]: I1014 15:13:38.088225 4860 scope.go:117] "RemoveContainer" containerID="e54fc15e1f819958a10ded658d529f15cb06538bd0a25d53e071dd9de2b17f31" Oct 14 15:13:38 crc kubenswrapper[4860]: E1014 15:13:38.088578 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e54fc15e1f819958a10ded658d529f15cb06538bd0a25d53e071dd9de2b17f31\": container with ID starting with e54fc15e1f819958a10ded658d529f15cb06538bd0a25d53e071dd9de2b17f31 not found: ID does not exist" containerID="e54fc15e1f819958a10ded658d529f15cb06538bd0a25d53e071dd9de2b17f31" Oct 14 15:13:38 crc kubenswrapper[4860]: I1014 15:13:38.088614 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e54fc15e1f819958a10ded658d529f15cb06538bd0a25d53e071dd9de2b17f31"} err="failed to get container status \"e54fc15e1f819958a10ded658d529f15cb06538bd0a25d53e071dd9de2b17f31\": rpc error: code = NotFound desc = could not find container \"e54fc15e1f819958a10ded658d529f15cb06538bd0a25d53e071dd9de2b17f31\": container with ID starting with e54fc15e1f819958a10ded658d529f15cb06538bd0a25d53e071dd9de2b17f31 not found: ID does not exist" Oct 14 15:13:38 crc kubenswrapper[4860]: I1014 15:13:38.088633 4860 scope.go:117] "RemoveContainer" containerID="0e3a071c7ee8eb6ca208ce634e05441e99870476f7f39d3a735100444bd1bd7a" Oct 14 15:13:38 crc kubenswrapper[4860]: E1014 15:13:38.088926 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e3a071c7ee8eb6ca208ce634e05441e99870476f7f39d3a735100444bd1bd7a\": container with ID starting with 0e3a071c7ee8eb6ca208ce634e05441e99870476f7f39d3a735100444bd1bd7a not found: ID does not exist" containerID="0e3a071c7ee8eb6ca208ce634e05441e99870476f7f39d3a735100444bd1bd7a" Oct 14 15:13:38 crc kubenswrapper[4860]: I1014 15:13:38.088953 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e3a071c7ee8eb6ca208ce634e05441e99870476f7f39d3a735100444bd1bd7a"} err="failed to get container status \"0e3a071c7ee8eb6ca208ce634e05441e99870476f7f39d3a735100444bd1bd7a\": rpc error: code = NotFound desc = could not find container \"0e3a071c7ee8eb6ca208ce634e05441e99870476f7f39d3a735100444bd1bd7a\": container with ID starting with 0e3a071c7ee8eb6ca208ce634e05441e99870476f7f39d3a735100444bd1bd7a not found: ID does not exist" Oct 14 15:13:38 crc kubenswrapper[4860]: I1014 15:13:38.958399 4860 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee","Type":"ContainerStarted","Data":"8d603c8469d410f80741280b0867d0fcf704c0838d49282ed545e1d21a444bce"} Oct 14 15:13:39 crc kubenswrapper[4860]: I1014 15:13:39.072830 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fc82d3c-afa8-4c2a-9e49-531b56497332" path="/var/lib/kubelet/pods/0fc82d3c-afa8-4c2a-9e49-531b56497332/volumes" Oct 14 15:13:39 crc kubenswrapper[4860]: I1014 15:13:39.073444 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2229cb3a-9254-47e8-8006-715670fb974e" path="/var/lib/kubelet/pods/2229cb3a-9254-47e8-8006-715670fb974e/volumes" Oct 14 15:13:41 crc kubenswrapper[4860]: I1014 15:13:41.047101 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee","Type":"ContainerStarted","Data":"c429b7f4ffc1870fa790247b6d35312417c2c510c116f66a327c498c1fab4cb8"} Oct 14 15:13:41 crc kubenswrapper[4860]: I1014 15:13:41.047798 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 15:13:41 crc kubenswrapper[4860]: I1014 15:13:41.088255 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.149644894 podStartE2EDuration="7.088227453s" podCreationTimestamp="2025-10-14 15:13:34 +0000 UTC" firstStartedPulling="2025-10-14 15:13:35.835992311 +0000 UTC m=+1477.422775760" lastFinishedPulling="2025-10-14 15:13:40.77457487 +0000 UTC m=+1482.361358319" observedRunningTime="2025-10-14 15:13:41.072976255 +0000 UTC m=+1482.659759704" watchObservedRunningTime="2025-10-14 15:13:41.088227453 +0000 UTC m=+1482.675010902" Oct 14 15:13:41 crc kubenswrapper[4860]: I1014 15:13:41.170234 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 15:13:41 crc kubenswrapper[4860]: I1014 15:13:41.179456 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 14 15:13:41 crc kubenswrapper[4860]: I1014 15:13:41.183724 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 15:13:42 crc kubenswrapper[4860]: I1014 15:13:42.055682 4860 generic.go:334] "Generic (PLEG): container finished" podID="26fd2566-3969-4d61-9bf0-9944df693a16" containerID="c3e740fdc4bd5da9ab7bf4ed22df472ae6d7eefa0ff590e11d50f493c88d0647" exitCode=0 Oct 14 15:13:42 crc kubenswrapper[4860]: I1014 15:13:42.055767 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wdb8w" event={"ID":"26fd2566-3969-4d61-9bf0-9944df693a16","Type":"ContainerDied","Data":"c3e740fdc4bd5da9ab7bf4ed22df472ae6d7eefa0ff590e11d50f493c88d0647"} Oct 14 15:13:42 crc kubenswrapper[4860]: I1014 15:13:42.074761 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 14 15:13:43 crc kubenswrapper[4860]: I1014 15:13:43.112202 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6gkdc" podUID="33caa1e1-70c8-4eb2-b3ee-2400962b4a11" containerName="registry-server" probeResult="failure" output=< Oct 14 15:13:43 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:13:43 crc kubenswrapper[4860]: > Oct 14 15:13:43 crc kubenswrapper[4860]: I1014 15:13:43.278455 4860 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 15:13:43 crc kubenswrapper[4860]: I1014 15:13:43.278745 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 15:13:43 crc kubenswrapper[4860]: I1014 15:13:43.457175 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wdb8w" Oct 14 15:13:43 crc kubenswrapper[4860]: I1014 15:13:43.562535 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42cpj\" (UniqueName: \"kubernetes.io/projected/26fd2566-3969-4d61-9bf0-9944df693a16-kube-api-access-42cpj\") pod \"26fd2566-3969-4d61-9bf0-9944df693a16\" (UID: \"26fd2566-3969-4d61-9bf0-9944df693a16\") " Oct 14 15:13:43 crc kubenswrapper[4860]: I1014 15:13:43.562732 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-config-data\") pod \"26fd2566-3969-4d61-9bf0-9944df693a16\" (UID: \"26fd2566-3969-4d61-9bf0-9944df693a16\") " Oct 14 15:13:43 crc kubenswrapper[4860]: I1014 15:13:43.562769 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-combined-ca-bundle\") pod \"26fd2566-3969-4d61-9bf0-9944df693a16\" (UID: \"26fd2566-3969-4d61-9bf0-9944df693a16\") " Oct 14 15:13:43 crc kubenswrapper[4860]: I1014 15:13:43.562799 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-scripts\") pod \"26fd2566-3969-4d61-9bf0-9944df693a16\" (UID: \"26fd2566-3969-4d61-9bf0-9944df693a16\") " Oct 14 15:13:43 crc kubenswrapper[4860]: I1014 15:13:43.572161 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-scripts" (OuterVolumeSpecName: "scripts") pod "26fd2566-3969-4d61-9bf0-9944df693a16" (UID: "26fd2566-3969-4d61-9bf0-9944df693a16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:43 crc kubenswrapper[4860]: I1014 15:13:43.575491 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26fd2566-3969-4d61-9bf0-9944df693a16-kube-api-access-42cpj" (OuterVolumeSpecName: "kube-api-access-42cpj") pod "26fd2566-3969-4d61-9bf0-9944df693a16" (UID: "26fd2566-3969-4d61-9bf0-9944df693a16"). InnerVolumeSpecName "kube-api-access-42cpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:13:43 crc kubenswrapper[4860]: I1014 15:13:43.593263 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26fd2566-3969-4d61-9bf0-9944df693a16" (UID: "26fd2566-3969-4d61-9bf0-9944df693a16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:43 crc kubenswrapper[4860]: I1014 15:13:43.607603 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-config-data" (OuterVolumeSpecName: "config-data") pod "26fd2566-3969-4d61-9bf0-9944df693a16" (UID: "26fd2566-3969-4d61-9bf0-9944df693a16"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:43 crc kubenswrapper[4860]: I1014 15:13:43.665067 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:43 crc kubenswrapper[4860]: I1014 15:13:43.665105 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:43 crc kubenswrapper[4860]: I1014 15:13:43.665129 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26fd2566-3969-4d61-9bf0-9944df693a16-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:43 crc kubenswrapper[4860]: I1014 15:13:43.665140 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42cpj\" (UniqueName: \"kubernetes.io/projected/26fd2566-3969-4d61-9bf0-9944df693a16-kube-api-access-42cpj\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:44 crc kubenswrapper[4860]: I1014 15:13:44.080663 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wdb8w" event={"ID":"26fd2566-3969-4d61-9bf0-9944df693a16","Type":"ContainerDied","Data":"f06a0a1e5a54e8923ade3dddbf5090162c5a21cc6b5a22c4c721b9f11be98e62"} Oct 14 15:13:44 crc kubenswrapper[4860]: I1014 15:13:44.080968 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f06a0a1e5a54e8923ade3dddbf5090162c5a21cc6b5a22c4c721b9f11be98e62" Oct 14 15:13:44 crc kubenswrapper[4860]: I1014 15:13:44.080716 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wdb8w" Oct 14 15:13:44 crc kubenswrapper[4860]: I1014 15:13:44.288203 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 15:13:44 crc kubenswrapper[4860]: I1014 15:13:44.288424 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6a643628-4b73-4dfe-bb93-145dcf750ae6" containerName="nova-api-log" containerID="cri-o://bf8aba1e44e1e3623d43c710095d30f05ac374a17a5d841236815f9bd5a2bf89" gracePeriod=30 Oct 14 15:13:44 crc kubenswrapper[4860]: I1014 15:13:44.288601 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6a643628-4b73-4dfe-bb93-145dcf750ae6" containerName="nova-api-api" containerID="cri-o://d860c018ed80b629c79f48879a26738eb902528b135c477a8703f7f2bd9f1f02" gracePeriod=30 Oct 14 15:13:44 crc kubenswrapper[4860]: I1014 15:13:44.302325 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 15:13:44 crc kubenswrapper[4860]: I1014 15:13:44.302574 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1d87ee81-5c67-46c2-93c9-46e2d2cea3d1" containerName="nova-scheduler-scheduler" containerID="cri-o://5c46dbf85ba527c4e84f5b2b9f268b48e6a4634950727f3563a9f8fc67608c20" gracePeriod=30 Oct 14 15:13:44 crc kubenswrapper[4860]: I1014 15:13:44.306232 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6a643628-4b73-4dfe-bb93-145dcf750ae6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 
15:13:44 crc kubenswrapper[4860]: I1014 15:13:44.306329 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6a643628-4b73-4dfe-bb93-145dcf750ae6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 15:13:44 crc kubenswrapper[4860]: I1014 15:13:44.310330 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 15:13:45 crc kubenswrapper[4860]: I1014 15:13:45.091300 4860 generic.go:334] "Generic (PLEG): container finished" podID="6a643628-4b73-4dfe-bb93-145dcf750ae6" containerID="bf8aba1e44e1e3623d43c710095d30f05ac374a17a5d841236815f9bd5a2bf89" exitCode=143 Oct 14 15:13:45 crc kubenswrapper[4860]: I1014 15:13:45.091377 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a643628-4b73-4dfe-bb93-145dcf750ae6","Type":"ContainerDied","Data":"bf8aba1e44e1e3623d43c710095d30f05ac374a17a5d841236815f9bd5a2bf89"} Oct 14 15:13:45 crc kubenswrapper[4860]: I1014 15:13:45.091482 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="eb99e52e-496c-48e1-a66b-71eb52b04370" containerName="nova-metadata-log" containerID="cri-o://d4bd77ff227868e7bdbe067406587d3573ae92632207860815c4001f7ed831ec" gracePeriod=30 Oct 14 15:13:45 crc kubenswrapper[4860]: I1014 15:13:45.091829 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="eb99e52e-496c-48e1-a66b-71eb52b04370" containerName="nova-metadata-metadata" containerID="cri-o://1ceac79effdc50843871e1ded540ffa0461ed5d148698bb5f5bac0899f9023aa" gracePeriod=30 Oct 14 15:13:46 crc kubenswrapper[4860]: I1014 15:13:46.101426 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eb99e52e-496c-48e1-a66b-71eb52b04370","Type":"ContainerDied","Data":"d4bd77ff227868e7bdbe067406587d3573ae92632207860815c4001f7ed831ec"} Oct 14 15:13:46 crc kubenswrapper[4860]: I1014 15:13:46.101512 4860 generic.go:334] "Generic (PLEG): container finished" podID="eb99e52e-496c-48e1-a66b-71eb52b04370" containerID="d4bd77ff227868e7bdbe067406587d3573ae92632207860815c4001f7ed831ec" exitCode=143 Oct 14 15:13:46 crc kubenswrapper[4860]: I1014 15:13:46.174720 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-shll4" Oct 14 15:13:46 crc kubenswrapper[4860]: I1014 15:13:46.242427 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-shll4" Oct 14 15:13:46 crc kubenswrapper[4860]: I1014 15:13:46.411438 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-shll4"] Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.119799 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-shll4" podUID="59dce401-ce86-4798-a1ef-6a520c406f54" containerName="registry-server" containerID="cri-o://2900df71eef6894d2199613ee76e42184b5493d4c13e167cafc3720dda7aee46" gracePeriod=2 Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.252127 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="eb99e52e-496c-48e1-a66b-71eb52b04370" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": read tcp 
10.217.0.2:41534->10.217.0.201:8775: read: connection reset by peer" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.252169 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="eb99e52e-496c-48e1-a66b-71eb52b04370" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": read tcp 10.217.0.2:41522->10.217.0.201:8775: read: connection reset by peer" Oct 14 15:13:48 crc kubenswrapper[4860]: E1014 15:13:48.537367 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5c46dbf85ba527c4e84f5b2b9f268b48e6a4634950727f3563a9f8fc67608c20" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 15:13:48 crc kubenswrapper[4860]: E1014 15:13:48.542275 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5c46dbf85ba527c4e84f5b2b9f268b48e6a4634950727f3563a9f8fc67608c20" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 15:13:48 crc kubenswrapper[4860]: E1014 15:13:48.543525 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5c46dbf85ba527c4e84f5b2b9f268b48e6a4634950727f3563a9f8fc67608c20" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 15:13:48 crc kubenswrapper[4860]: E1014 15:13:48.543586 4860 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1d87ee81-5c67-46c2-93c9-46e2d2cea3d1" containerName="nova-scheduler-scheduler" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.574181 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shll4" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.697881 4860 util.go:48] "No ready sandbox for pod can be found. 
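The teardown above follows the kubelet's usual shape: a "SyncLoop DELETE" from the API server, then "Killing container with a grace period" per container (gracePeriod=30 for the nova pods, 2 for the marketplace catalog pod), then PLEG "container finished" events with exitCode=143, i.e. 128+15, termination by SIGTERM inside the grace window. Below is a minimal Go sketch of that SIGTERM-then-SIGKILL pattern; it is illustrative only, not kubelet or CRI-O code, and the sleep command and 30-second grace are placeholders.

    // graceful_stop.go - illustrative sketch of SIGTERM-then-SIGKILL with a
    // grace period, the pattern behind "Killing container with a grace period"
    // and exitCode=143 (128+SIGTERM) in the log above. Not kubelet code.
    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
        // Ask the process to exit cleanly first.
        if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
            return err
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err // exited inside the grace window
        case <-time.After(grace):
            _ = cmd.Process.Kill() // grace expired: escalate to SIGKILL
            return <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "300") // placeholder workload
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        fmt.Println("stop result:", stopWithGrace(cmd, 30*time.Second))
    }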
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.774761 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59dce401-ce86-4798-a1ef-6a520c406f54-catalog-content\") pod \"59dce401-ce86-4798-a1ef-6a520c406f54\" (UID: \"59dce401-ce86-4798-a1ef-6a520c406f54\") " Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.774923 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glgt2\" (UniqueName: \"kubernetes.io/projected/59dce401-ce86-4798-a1ef-6a520c406f54-kube-api-access-glgt2\") pod \"59dce401-ce86-4798-a1ef-6a520c406f54\" (UID: \"59dce401-ce86-4798-a1ef-6a520c406f54\") " Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.774996 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59dce401-ce86-4798-a1ef-6a520c406f54-utilities\") pod \"59dce401-ce86-4798-a1ef-6a520c406f54\" (UID: \"59dce401-ce86-4798-a1ef-6a520c406f54\") " Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.779252 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59dce401-ce86-4798-a1ef-6a520c406f54-utilities" (OuterVolumeSpecName: "utilities") pod "59dce401-ce86-4798-a1ef-6a520c406f54" (UID: "59dce401-ce86-4798-a1ef-6a520c406f54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.782619 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59dce401-ce86-4798-a1ef-6a520c406f54-kube-api-access-glgt2" (OuterVolumeSpecName: "kube-api-access-glgt2") pod "59dce401-ce86-4798-a1ef-6a520c406f54" (UID: "59dce401-ce86-4798-a1ef-6a520c406f54"). InnerVolumeSpecName "kube-api-access-glgt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.875045 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59dce401-ce86-4798-a1ef-6a520c406f54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59dce401-ce86-4798-a1ef-6a520c406f54" (UID: "59dce401-ce86-4798-a1ef-6a520c406f54"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.876502 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb99e52e-496c-48e1-a66b-71eb52b04370-logs\") pod \"eb99e52e-496c-48e1-a66b-71eb52b04370\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.876532 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-nova-metadata-tls-certs\") pod \"eb99e52e-496c-48e1-a66b-71eb52b04370\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.876553 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-config-data\") pod \"eb99e52e-496c-48e1-a66b-71eb52b04370\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.876688 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78rwq\" (UniqueName: \"kubernetes.io/projected/eb99e52e-496c-48e1-a66b-71eb52b04370-kube-api-access-78rwq\") pod \"eb99e52e-496c-48e1-a66b-71eb52b04370\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.876769 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-combined-ca-bundle\") pod \"eb99e52e-496c-48e1-a66b-71eb52b04370\" (UID: \"eb99e52e-496c-48e1-a66b-71eb52b04370\") " Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.877130 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb99e52e-496c-48e1-a66b-71eb52b04370-logs" (OuterVolumeSpecName: "logs") pod "eb99e52e-496c-48e1-a66b-71eb52b04370" (UID: "eb99e52e-496c-48e1-a66b-71eb52b04370"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.877149 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glgt2\" (UniqueName: \"kubernetes.io/projected/59dce401-ce86-4798-a1ef-6a520c406f54-kube-api-access-glgt2\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.877193 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59dce401-ce86-4798-a1ef-6a520c406f54-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.877206 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59dce401-ce86-4798-a1ef-6a520c406f54-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.882467 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb99e52e-496c-48e1-a66b-71eb52b04370-kube-api-access-78rwq" (OuterVolumeSpecName: "kube-api-access-78rwq") pod "eb99e52e-496c-48e1-a66b-71eb52b04370" (UID: "eb99e52e-496c-48e1-a66b-71eb52b04370"). InnerVolumeSpecName "kube-api-access-78rwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.902662 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-config-data" (OuterVolumeSpecName: "config-data") pod "eb99e52e-496c-48e1-a66b-71eb52b04370" (UID: "eb99e52e-496c-48e1-a66b-71eb52b04370"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.911061 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb99e52e-496c-48e1-a66b-71eb52b04370" (UID: "eb99e52e-496c-48e1-a66b-71eb52b04370"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.930965 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "eb99e52e-496c-48e1-a66b-71eb52b04370" (UID: "eb99e52e-496c-48e1-a66b-71eb52b04370"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.978835 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.978875 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb99e52e-496c-48e1-a66b-71eb52b04370-logs\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.978887 4860 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.978899 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb99e52e-496c-48e1-a66b-71eb52b04370-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:48 crc kubenswrapper[4860]: I1014 15:13:48.978911 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78rwq\" (UniqueName: \"kubernetes.io/projected/eb99e52e-496c-48e1-a66b-71eb52b04370-kube-api-access-78rwq\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.141174 4860 generic.go:334] "Generic (PLEG): container finished" podID="eb99e52e-496c-48e1-a66b-71eb52b04370" containerID="1ceac79effdc50843871e1ded540ffa0461ed5d148698bb5f5bac0899f9023aa" exitCode=0 Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.141227 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eb99e52e-496c-48e1-a66b-71eb52b04370","Type":"ContainerDied","Data":"1ceac79effdc50843871e1ded540ffa0461ed5d148698bb5f5bac0899f9023aa"} Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.141258 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"eb99e52e-496c-48e1-a66b-71eb52b04370","Type":"ContainerDied","Data":"e0724c063fcb7c82c728ec3718e5eeab219e0ceb74b12a29c2c0e58860cdff3e"} Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.141275 4860 scope.go:117] "RemoveContainer" containerID="1ceac79effdc50843871e1ded540ffa0461ed5d148698bb5f5bac0899f9023aa" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.141390 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.152665 4860 generic.go:334] "Generic (PLEG): container finished" podID="59dce401-ce86-4798-a1ef-6a520c406f54" containerID="2900df71eef6894d2199613ee76e42184b5493d4c13e167cafc3720dda7aee46" exitCode=0 Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.152706 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shll4" event={"ID":"59dce401-ce86-4798-a1ef-6a520c406f54","Type":"ContainerDied","Data":"2900df71eef6894d2199613ee76e42184b5493d4c13e167cafc3720dda7aee46"} Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.152729 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shll4" event={"ID":"59dce401-ce86-4798-a1ef-6a520c406f54","Type":"ContainerDied","Data":"f09be32998cd1ebd4762c34e5738ffee79df3f890d7df52a3916dced89a0f9ae"} Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.152784 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shll4" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.175491 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.176825 4860 scope.go:117] "RemoveContainer" containerID="d4bd77ff227868e7bdbe067406587d3573ae92632207860815c4001f7ed831ec" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.195812 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.215474 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-shll4"] Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.222055 4860 scope.go:117] "RemoveContainer" containerID="1ceac79effdc50843871e1ded540ffa0461ed5d148698bb5f5bac0899f9023aa" Oct 14 15:13:49 crc kubenswrapper[4860]: E1014 15:13:49.224076 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ceac79effdc50843871e1ded540ffa0461ed5d148698bb5f5bac0899f9023aa\": container with ID starting with 1ceac79effdc50843871e1ded540ffa0461ed5d148698bb5f5bac0899f9023aa not found: ID does not exist" containerID="1ceac79effdc50843871e1ded540ffa0461ed5d148698bb5f5bac0899f9023aa" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.224124 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ceac79effdc50843871e1ded540ffa0461ed5d148698bb5f5bac0899f9023aa"} err="failed to get container status \"1ceac79effdc50843871e1ded540ffa0461ed5d148698bb5f5bac0899f9023aa\": rpc error: code = NotFound desc = could not find container \"1ceac79effdc50843871e1ded540ffa0461ed5d148698bb5f5bac0899f9023aa\": container with ID starting with 1ceac79effdc50843871e1ded540ffa0461ed5d148698bb5f5bac0899f9023aa not found: ID does not exist" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.224153 4860 
scope.go:117] "RemoveContainer" containerID="d4bd77ff227868e7bdbe067406587d3573ae92632207860815c4001f7ed831ec" Oct 14 15:13:49 crc kubenswrapper[4860]: E1014 15:13:49.225552 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4bd77ff227868e7bdbe067406587d3573ae92632207860815c4001f7ed831ec\": container with ID starting with d4bd77ff227868e7bdbe067406587d3573ae92632207860815c4001f7ed831ec not found: ID does not exist" containerID="d4bd77ff227868e7bdbe067406587d3573ae92632207860815c4001f7ed831ec" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.225589 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4bd77ff227868e7bdbe067406587d3573ae92632207860815c4001f7ed831ec"} err="failed to get container status \"d4bd77ff227868e7bdbe067406587d3573ae92632207860815c4001f7ed831ec\": rpc error: code = NotFound desc = could not find container \"d4bd77ff227868e7bdbe067406587d3573ae92632207860815c4001f7ed831ec\": container with ID starting with d4bd77ff227868e7bdbe067406587d3573ae92632207860815c4001f7ed831ec not found: ID does not exist" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.225615 4860 scope.go:117] "RemoveContainer" containerID="2900df71eef6894d2199613ee76e42184b5493d4c13e167cafc3720dda7aee46" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.227163 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-shll4"] Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.236550 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 14 15:13:49 crc kubenswrapper[4860]: E1014 15:13:49.236942 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59dce401-ce86-4798-a1ef-6a520c406f54" containerName="extract-utilities" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.236960 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="59dce401-ce86-4798-a1ef-6a520c406f54" containerName="extract-utilities" Oct 14 15:13:49 crc kubenswrapper[4860]: E1014 15:13:49.236976 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2229cb3a-9254-47e8-8006-715670fb974e" containerName="extract-content" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.236982 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2229cb3a-9254-47e8-8006-715670fb974e" containerName="extract-content" Oct 14 15:13:49 crc kubenswrapper[4860]: E1014 15:13:49.236999 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26fd2566-3969-4d61-9bf0-9944df693a16" containerName="nova-manage" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.237005 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fd2566-3969-4d61-9bf0-9944df693a16" containerName="nova-manage" Oct 14 15:13:49 crc kubenswrapper[4860]: E1014 15:13:49.237017 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59dce401-ce86-4798-a1ef-6a520c406f54" containerName="registry-server" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.237060 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="59dce401-ce86-4798-a1ef-6a520c406f54" containerName="registry-server" Oct 14 15:13:49 crc kubenswrapper[4860]: E1014 15:13:49.237078 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59dce401-ce86-4798-a1ef-6a520c406f54" containerName="extract-content" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.237085 4860 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="59dce401-ce86-4798-a1ef-6a520c406f54" containerName="extract-content" Oct 14 15:13:49 crc kubenswrapper[4860]: E1014 15:13:49.237100 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2229cb3a-9254-47e8-8006-715670fb974e" containerName="registry-server" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.237105 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2229cb3a-9254-47e8-8006-715670fb974e" containerName="registry-server" Oct 14 15:13:49 crc kubenswrapper[4860]: E1014 15:13:49.237122 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb99e52e-496c-48e1-a66b-71eb52b04370" containerName="nova-metadata-metadata" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.237128 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb99e52e-496c-48e1-a66b-71eb52b04370" containerName="nova-metadata-metadata" Oct 14 15:13:49 crc kubenswrapper[4860]: E1014 15:13:49.237138 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2229cb3a-9254-47e8-8006-715670fb974e" containerName="extract-utilities" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.237144 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2229cb3a-9254-47e8-8006-715670fb974e" containerName="extract-utilities" Oct 14 15:13:49 crc kubenswrapper[4860]: E1014 15:13:49.237156 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc82d3c-afa8-4c2a-9e49-531b56497332" containerName="init" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.237161 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc82d3c-afa8-4c2a-9e49-531b56497332" containerName="init" Oct 14 15:13:49 crc kubenswrapper[4860]: E1014 15:13:49.237170 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb99e52e-496c-48e1-a66b-71eb52b04370" containerName="nova-metadata-log" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.237176 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb99e52e-496c-48e1-a66b-71eb52b04370" containerName="nova-metadata-log" Oct 14 15:13:49 crc kubenswrapper[4860]: E1014 15:13:49.237187 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc82d3c-afa8-4c2a-9e49-531b56497332" containerName="dnsmasq-dns" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.237193 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc82d3c-afa8-4c2a-9e49-531b56497332" containerName="dnsmasq-dns" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.237373 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2229cb3a-9254-47e8-8006-715670fb974e" containerName="registry-server" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.237389 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb99e52e-496c-48e1-a66b-71eb52b04370" containerName="nova-metadata-log" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.237400 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fc82d3c-afa8-4c2a-9e49-531b56497332" containerName="dnsmasq-dns" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.237412 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="59dce401-ce86-4798-a1ef-6a520c406f54" containerName="registry-server" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.237418 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="26fd2566-3969-4d61-9bf0-9944df693a16" containerName="nova-manage" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.237427 4860 
memory_manager.go:354] "RemoveStaleState removing state" podUID="eb99e52e-496c-48e1-a66b-71eb52b04370" containerName="nova-metadata-metadata" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.238428 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.244887 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.248260 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.248343 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.390091 4860 scope.go:117] "RemoveContainer" containerID="1fe13852a6f0169ba1dd1eaba4225d6c1ab662595be0b450c648881a9f67abe7" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.391270 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5\") " pod="openstack/nova-metadata-0" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.391386 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5-config-data\") pod \"nova-metadata-0\" (UID: \"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5\") " pod="openstack/nova-metadata-0" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.391416 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5\") " pod="openstack/nova-metadata-0" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.391465 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5-logs\") pod \"nova-metadata-0\" (UID: \"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5\") " pod="openstack/nova-metadata-0" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.391545 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jh2s\" (UniqueName: \"kubernetes.io/projected/0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5-kube-api-access-7jh2s\") pod \"nova-metadata-0\" (UID: \"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5\") " pod="openstack/nova-metadata-0" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.422558 4860 scope.go:117] "RemoveContainer" containerID="f1fecd3441feb1915795ce464fe9cb169b9fbd5faa6578d4788a8061375bdb11" Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.470634 4860 scope.go:117] "RemoveContainer" containerID="2900df71eef6894d2199613ee76e42184b5493d4c13e167cafc3720dda7aee46" Oct 14 15:13:49 crc kubenswrapper[4860]: E1014 15:13:49.472089 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2900df71eef6894d2199613ee76e42184b5493d4c13e167cafc3720dda7aee46\": container with ID starting with 
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.472115 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2900df71eef6894d2199613ee76e42184b5493d4c13e167cafc3720dda7aee46"} err="failed to get container status \"2900df71eef6894d2199613ee76e42184b5493d4c13e167cafc3720dda7aee46\": rpc error: code = NotFound desc = could not find container \"2900df71eef6894d2199613ee76e42184b5493d4c13e167cafc3720dda7aee46\": container with ID starting with 2900df71eef6894d2199613ee76e42184b5493d4c13e167cafc3720dda7aee46 not found: ID does not exist"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.472133 4860 scope.go:117] "RemoveContainer" containerID="1fe13852a6f0169ba1dd1eaba4225d6c1ab662595be0b450c648881a9f67abe7"
Oct 14 15:13:49 crc kubenswrapper[4860]: E1014 15:13:49.476334 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fe13852a6f0169ba1dd1eaba4225d6c1ab662595be0b450c648881a9f67abe7\": container with ID starting with 1fe13852a6f0169ba1dd1eaba4225d6c1ab662595be0b450c648881a9f67abe7 not found: ID does not exist" containerID="1fe13852a6f0169ba1dd1eaba4225d6c1ab662595be0b450c648881a9f67abe7"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.476390 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe13852a6f0169ba1dd1eaba4225d6c1ab662595be0b450c648881a9f67abe7"} err="failed to get container status \"1fe13852a6f0169ba1dd1eaba4225d6c1ab662595be0b450c648881a9f67abe7\": rpc error: code = NotFound desc = could not find container \"1fe13852a6f0169ba1dd1eaba4225d6c1ab662595be0b450c648881a9f67abe7\": container with ID starting with 1fe13852a6f0169ba1dd1eaba4225d6c1ab662595be0b450c648881a9f67abe7 not found: ID does not exist"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.476422 4860 scope.go:117] "RemoveContainer" containerID="f1fecd3441feb1915795ce464fe9cb169b9fbd5faa6578d4788a8061375bdb11"
Oct 14 15:13:49 crc kubenswrapper[4860]: E1014 15:13:49.476713 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1fecd3441feb1915795ce464fe9cb169b9fbd5faa6578d4788a8061375bdb11\": container with ID starting with f1fecd3441feb1915795ce464fe9cb169b9fbd5faa6578d4788a8061375bdb11 not found: ID does not exist" containerID="f1fecd3441feb1915795ce464fe9cb169b9fbd5faa6578d4788a8061375bdb11"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.476734 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1fecd3441feb1915795ce464fe9cb169b9fbd5faa6578d4788a8061375bdb11"} err="failed to get container status \"f1fecd3441feb1915795ce464fe9cb169b9fbd5faa6578d4788a8061375bdb11\": rpc error: code = NotFound desc = could not find container \"f1fecd3441feb1915795ce464fe9cb169b9fbd5faa6578d4788a8061375bdb11\": container with ID starting with f1fecd3441feb1915795ce464fe9cb169b9fbd5faa6578d4788a8061375bdb11 not found: ID does not exist"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.493587 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5-logs\") pod \"nova-metadata-0\" (UID: \"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.493683 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jh2s\" (UniqueName: \"kubernetes.io/projected/0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5-kube-api-access-7jh2s\") pod \"nova-metadata-0\" (UID: \"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.493772 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.493856 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5-config-data\") pod \"nova-metadata-0\" (UID: \"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.493879 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.494177 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5-logs\") pod \"nova-metadata-0\" (UID: \"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.499646 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.509795 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jh2s\" (UniqueName: \"kubernetes.io/projected/0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5-kube-api-access-7jh2s\") pod \"nova-metadata-0\" (UID: \"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.510250 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5-config-data\") pod \"nova-metadata-0\" (UID: \"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.511222 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5\") " pod="openstack/nova-metadata-0"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.647242 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.688486 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.796848 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-config-data\") pod \"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1\" (UID: \"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1\") "
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.797227 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpbv9\" (UniqueName: \"kubernetes.io/projected/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-kube-api-access-wpbv9\") pod \"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1\" (UID: \"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1\") "
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.797308 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-combined-ca-bundle\") pod \"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1\" (UID: \"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1\") "
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.806681 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-kube-api-access-wpbv9" (OuterVolumeSpecName: "kube-api-access-wpbv9") pod "1d87ee81-5c67-46c2-93c9-46e2d2cea3d1" (UID: "1d87ee81-5c67-46c2-93c9-46e2d2cea3d1"). InnerVolumeSpecName "kube-api-access-wpbv9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.836499 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d87ee81-5c67-46c2-93c9-46e2d2cea3d1" (UID: "1d87ee81-5c67-46c2-93c9-46e2d2cea3d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.844484 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-config-data" (OuterVolumeSpecName: "config-data") pod "1d87ee81-5c67-46c2-93c9-46e2d2cea3d1" (UID: "1d87ee81-5c67-46c2-93c9-46e2d2cea3d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.901138 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpbv9\" (UniqueName: \"kubernetes.io/projected/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-kube-api-access-wpbv9\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.901194 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:49 crc kubenswrapper[4860]: I1014 15:13:49.901206 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:50 crc kubenswrapper[4860]: E1014 15:13:50.065925 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a643628_4b73_4dfe_bb93_145dcf750ae6.slice/crio-d860c018ed80b629c79f48879a26738eb902528b135c477a8703f7f2bd9f1f02.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a643628_4b73_4dfe_bb93_145dcf750ae6.slice/crio-conmon-d860c018ed80b629c79f48879a26738eb902528b135c477a8703f7f2bd9f1f02.scope\": RecentStats: unable to find data in memory cache]"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.169536 4860 generic.go:334] "Generic (PLEG): container finished" podID="1d87ee81-5c67-46c2-93c9-46e2d2cea3d1" containerID="5c46dbf85ba527c4e84f5b2b9f268b48e6a4634950727f3563a9f8fc67608c20" exitCode=0
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.169806 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1","Type":"ContainerDied","Data":"5c46dbf85ba527c4e84f5b2b9f268b48e6a4634950727f3563a9f8fc67608c20"}
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.169831 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d87ee81-5c67-46c2-93c9-46e2d2cea3d1","Type":"ContainerDied","Data":"6b28eacfe603b5e2cb7e1a0c692203244ac263ec058f3551c51bbaf023fd5506"}
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.169847 4860 scope.go:117] "RemoveContainer" containerID="5c46dbf85ba527c4e84f5b2b9f268b48e6a4634950727f3563a9f8fc67608c20"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.169896 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.173412 4860 generic.go:334] "Generic (PLEG): container finished" podID="6a643628-4b73-4dfe-bb93-145dcf750ae6" containerID="d860c018ed80b629c79f48879a26738eb902528b135c477a8703f7f2bd9f1f02" exitCode=0
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.173459 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a643628-4b73-4dfe-bb93-145dcf750ae6","Type":"ContainerDied","Data":"d860c018ed80b629c79f48879a26738eb902528b135c477a8703f7f2bd9f1f02"}
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.174696 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.195558 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.204512 4860 scope.go:117] "RemoveContainer" containerID="5c46dbf85ba527c4e84f5b2b9f268b48e6a4634950727f3563a9f8fc67608c20"
Oct 14 15:13:50 crc kubenswrapper[4860]: E1014 15:13:50.205018 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c46dbf85ba527c4e84f5b2b9f268b48e6a4634950727f3563a9f8fc67608c20\": container with ID starting with 5c46dbf85ba527c4e84f5b2b9f268b48e6a4634950727f3563a9f8fc67608c20 not found: ID does not exist" containerID="5c46dbf85ba527c4e84f5b2b9f268b48e6a4634950727f3563a9f8fc67608c20"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.205063 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c46dbf85ba527c4e84f5b2b9f268b48e6a4634950727f3563a9f8fc67608c20"} err="failed to get container status \"5c46dbf85ba527c4e84f5b2b9f268b48e6a4634950727f3563a9f8fc67608c20\": rpc error: code = NotFound desc = could not find container \"5c46dbf85ba527c4e84f5b2b9f268b48e6a4634950727f3563a9f8fc67608c20\": container with ID starting with 5c46dbf85ba527c4e84f5b2b9f268b48e6a4634950727f3563a9f8fc67608c20 not found: ID does not exist"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.243069 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.277874 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.293210 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Oct 14 15:13:50 crc kubenswrapper[4860]: E1014 15:13:50.293776 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d87ee81-5c67-46c2-93c9-46e2d2cea3d1" containerName="nova-scheduler-scheduler"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.293797 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d87ee81-5c67-46c2-93c9-46e2d2cea3d1" containerName="nova-scheduler-scheduler"
Oct 14 15:13:50 crc kubenswrapper[4860]: E1014 15:13:50.293826 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a643628-4b73-4dfe-bb93-145dcf750ae6" containerName="nova-api-api"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.293834 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a643628-4b73-4dfe-bb93-145dcf750ae6" containerName="nova-api-api"
Oct 14 15:13:50 crc kubenswrapper[4860]: E1014 15:13:50.293844 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a643628-4b73-4dfe-bb93-145dcf750ae6" containerName="nova-api-log"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.293851 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a643628-4b73-4dfe-bb93-145dcf750ae6" containerName="nova-api-log"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.294115 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d87ee81-5c67-46c2-93c9-46e2d2cea3d1" containerName="nova-scheduler-scheduler"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.294129 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a643628-4b73-4dfe-bb93-145dcf750ae6" containerName="nova-api-api"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.294138 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a643628-4b73-4dfe-bb93-145dcf750ae6" containerName="nova-api-log"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.294921 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.300743 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.310138 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-public-tls-certs\") pod \"6a643628-4b73-4dfe-bb93-145dcf750ae6\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") "
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.310307 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-internal-tls-certs\") pod \"6a643628-4b73-4dfe-bb93-145dcf750ae6\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") "
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.310361 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a643628-4b73-4dfe-bb93-145dcf750ae6-logs\") pod \"6a643628-4b73-4dfe-bb93-145dcf750ae6\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") "
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.310437 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-config-data\") pod \"6a643628-4b73-4dfe-bb93-145dcf750ae6\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") "
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.310547 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-combined-ca-bundle\") pod \"6a643628-4b73-4dfe-bb93-145dcf750ae6\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") "
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.310597 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7gkc\" (UniqueName: \"kubernetes.io/projected/6a643628-4b73-4dfe-bb93-145dcf750ae6-kube-api-access-x7gkc\") pod \"6a643628-4b73-4dfe-bb93-145dcf750ae6\" (UID: \"6a643628-4b73-4dfe-bb93-145dcf750ae6\") "
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.312003 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a643628-4b73-4dfe-bb93-145dcf750ae6-logs" (OuterVolumeSpecName: "logs") pod "6a643628-4b73-4dfe-bb93-145dcf750ae6" (UID: "6a643628-4b73-4dfe-bb93-145dcf750ae6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.318132 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.318985 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a643628-4b73-4dfe-bb93-145dcf750ae6-kube-api-access-x7gkc" (OuterVolumeSpecName: "kube-api-access-x7gkc") pod "6a643628-4b73-4dfe-bb93-145dcf750ae6" (UID: "6a643628-4b73-4dfe-bb93-145dcf750ae6"). InnerVolumeSpecName "kube-api-access-x7gkc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.344020 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-config-data" (OuterVolumeSpecName: "config-data") pod "6a643628-4b73-4dfe-bb93-145dcf750ae6" (UID: "6a643628-4b73-4dfe-bb93-145dcf750ae6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.370001 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a643628-4b73-4dfe-bb93-145dcf750ae6" (UID: "6a643628-4b73-4dfe-bb93-145dcf750ae6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.389349 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6a643628-4b73-4dfe-bb93-145dcf750ae6" (UID: "6a643628-4b73-4dfe-bb93-145dcf750ae6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.397847 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6a643628-4b73-4dfe-bb93-145dcf750ae6" (UID: "6a643628-4b73-4dfe-bb93-145dcf750ae6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.412213 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edadf2e8-459f-4994-a1f8-a059cbdb46c6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"edadf2e8-459f-4994-a1f8-a059cbdb46c6\") " pod="openstack/nova-scheduler-0"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.412606 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqp2d\" (UniqueName: \"kubernetes.io/projected/edadf2e8-459f-4994-a1f8-a059cbdb46c6-kube-api-access-sqp2d\") pod \"nova-scheduler-0\" (UID: \"edadf2e8-459f-4994-a1f8-a059cbdb46c6\") " pod="openstack/nova-scheduler-0"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.412693 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edadf2e8-459f-4994-a1f8-a059cbdb46c6-config-data\") pod \"nova-scheduler-0\" (UID: \"edadf2e8-459f-4994-a1f8-a059cbdb46c6\") " pod="openstack/nova-scheduler-0"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.412765 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.412778 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7gkc\" (UniqueName: \"kubernetes.io/projected/6a643628-4b73-4dfe-bb93-145dcf750ae6-kube-api-access-x7gkc\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.412788 4860 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-public-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.412797 4860 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.412805 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a643628-4b73-4dfe-bb93-145dcf750ae6-logs\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.412814 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a643628-4b73-4dfe-bb93-145dcf750ae6-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.515228 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edadf2e8-459f-4994-a1f8-a059cbdb46c6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"edadf2e8-459f-4994-a1f8-a059cbdb46c6\") " pod="openstack/nova-scheduler-0"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.515365 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqp2d\" (UniqueName: \"kubernetes.io/projected/edadf2e8-459f-4994-a1f8-a059cbdb46c6-kube-api-access-sqp2d\") pod \"nova-scheduler-0\" (UID: \"edadf2e8-459f-4994-a1f8-a059cbdb46c6\") " pod="openstack/nova-scheduler-0"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.515752 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edadf2e8-459f-4994-a1f8-a059cbdb46c6-config-data\") pod \"nova-scheduler-0\" (UID: \"edadf2e8-459f-4994-a1f8-a059cbdb46c6\") " pod="openstack/nova-scheduler-0"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.520150 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edadf2e8-459f-4994-a1f8-a059cbdb46c6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"edadf2e8-459f-4994-a1f8-a059cbdb46c6\") " pod="openstack/nova-scheduler-0"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.521631 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edadf2e8-459f-4994-a1f8-a059cbdb46c6-config-data\") pod \"nova-scheduler-0\" (UID: \"edadf2e8-459f-4994-a1f8-a059cbdb46c6\") " pod="openstack/nova-scheduler-0"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.532373 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqp2d\" (UniqueName: \"kubernetes.io/projected/edadf2e8-459f-4994-a1f8-a059cbdb46c6-kube-api-access-sqp2d\") pod \"nova-scheduler-0\" (UID: \"edadf2e8-459f-4994-a1f8-a059cbdb46c6\") " pod="openstack/nova-scheduler-0"
Oct 14 15:13:50 crc kubenswrapper[4860]: I1014 15:13:50.764611 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.074250 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d87ee81-5c67-46c2-93c9-46e2d2cea3d1" path="/var/lib/kubelet/pods/1d87ee81-5c67-46c2-93c9-46e2d2cea3d1/volumes"
Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.075257 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59dce401-ce86-4798-a1ef-6a520c406f54" path="/var/lib/kubelet/pods/59dce401-ce86-4798-a1ef-6a520c406f54/volumes"
Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.076938 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb99e52e-496c-48e1-a66b-71eb52b04370" path="/var/lib/kubelet/pods/eb99e52e-496c-48e1-a66b-71eb52b04370/volumes"
Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.185761 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5","Type":"ContainerStarted","Data":"627f6f2a0c73f12463b1cab7315adddbeffc7d96edb222e5958dae269097aae1"}
Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.186817 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5","Type":"ContainerStarted","Data":"2f0775ca2892c94161ae44deb0a155b12ff50786c07ba3ddcdc68e4f5273d067"}
Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.186883 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5","Type":"ContainerStarted","Data":"35c55572fc2d1326c2a99aa0e56b75ec8bc7d8c03b9ac953e1b1dc03e9e82975"}
Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.189097 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a643628-4b73-4dfe-bb93-145dcf750ae6","Type":"ContainerDied","Data":"b1c557009d999134f89f9b3ad056b8769808718fbd8aef8680a2c781b1f50916"}
Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.189628 4860 scope.go:117] "RemoveContainer" containerID="d860c018ed80b629c79f48879a26738eb902528b135c477a8703f7f2bd9f1f02"
Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.189849 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.240541 4860 scope.go:117] "RemoveContainer" containerID="bf8aba1e44e1e3623d43c710095d30f05ac374a17a5d841236815f9bd5a2bf89"
Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.261423 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.2613968 podStartE2EDuration="2.2613968s" podCreationTimestamp="2025-10-14 15:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:13:51.2347392 +0000 UTC m=+1492.821522649" watchObservedRunningTime="2025-10-14 15:13:51.2613968 +0000 UTC m=+1492.848180249"
Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.300700 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.331677 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.340757 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.348845 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.350596 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Need to start a new one" pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.353047 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.362448 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.369204 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.376986 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.538669 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r2x2\" (UniqueName: \"kubernetes.io/projected/be5646ba-6f94-4628-85ef-5091fee066d5-kube-api-access-7r2x2\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.538723 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be5646ba-6f94-4628-85ef-5091fee066d5-config-data\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.538753 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be5646ba-6f94-4628-85ef-5091fee066d5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.538773 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be5646ba-6f94-4628-85ef-5091fee066d5-logs\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.538843 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be5646ba-6f94-4628-85ef-5091fee066d5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.538909 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be5646ba-6f94-4628-85ef-5091fee066d5-public-tls-certs\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.640957 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be5646ba-6f94-4628-85ef-5091fee066d5-public-tls-certs\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.641136 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r2x2\" (UniqueName: \"kubernetes.io/projected/be5646ba-6f94-4628-85ef-5091fee066d5-kube-api-access-7r2x2\") pod 
\"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.641173 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be5646ba-6f94-4628-85ef-5091fee066d5-config-data\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.641611 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be5646ba-6f94-4628-85ef-5091fee066d5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.641643 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be5646ba-6f94-4628-85ef-5091fee066d5-logs\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.642095 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be5646ba-6f94-4628-85ef-5091fee066d5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.642421 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be5646ba-6f94-4628-85ef-5091fee066d5-logs\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.645669 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be5646ba-6f94-4628-85ef-5091fee066d5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.647759 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be5646ba-6f94-4628-85ef-5091fee066d5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.651036 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be5646ba-6f94-4628-85ef-5091fee066d5-config-data\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.658883 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be5646ba-6f94-4628-85ef-5091fee066d5-public-tls-certs\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.659243 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r2x2\" (UniqueName: \"kubernetes.io/projected/be5646ba-6f94-4628-85ef-5091fee066d5-kube-api-access-7r2x2\") pod \"nova-api-0\" (UID: \"be5646ba-6f94-4628-85ef-5091fee066d5\") " pod="openstack/nova-api-0" Oct 
14 15:13:51 crc kubenswrapper[4860]: I1014 15:13:51.761866 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 15:13:52 crc kubenswrapper[4860]: I1014 15:13:52.096971 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6gkdc" Oct 14 15:13:52 crc kubenswrapper[4860]: I1014 15:13:52.162616 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6gkdc" Oct 14 15:13:52 crc kubenswrapper[4860]: I1014 15:13:52.206197 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"edadf2e8-459f-4994-a1f8-a059cbdb46c6","Type":"ContainerStarted","Data":"5684c583c7f24aa5586a00093f4b9c0809d28df2062f67eac16ea50ea76f7ed0"} Oct 14 15:13:52 crc kubenswrapper[4860]: I1014 15:13:52.206264 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"edadf2e8-459f-4994-a1f8-a059cbdb46c6","Type":"ContainerStarted","Data":"c896cc543bc84667f4505f1a35187c4ea35962699ca2e713acb683e35d9299fb"} Oct 14 15:13:52 crc kubenswrapper[4860]: I1014 15:13:52.210501 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 15:13:52 crc kubenswrapper[4860]: W1014 15:13:52.219792 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe5646ba_6f94_4628_85ef_5091fee066d5.slice/crio-9843c46f52223207079d1700d8e3fed5e0d90216ecd29422214aa00158538113 WatchSource:0}: Error finding container 9843c46f52223207079d1700d8e3fed5e0d90216ecd29422214aa00158538113: Status 404 returned error can't find the container with id 9843c46f52223207079d1700d8e3fed5e0d90216ecd29422214aa00158538113 Oct 14 15:13:52 crc kubenswrapper[4860]: I1014 15:13:52.231747 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.231728132 podStartE2EDuration="2.231728132s" podCreationTimestamp="2025-10-14 15:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:13:52.226155256 +0000 UTC m=+1493.812938735" watchObservedRunningTime="2025-10-14 15:13:52.231728132 +0000 UTC m=+1493.818511591" Oct 14 15:13:52 crc kubenswrapper[4860]: I1014 15:13:52.834471 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6gkdc"] Oct 14 15:13:53 crc kubenswrapper[4860]: I1014 15:13:53.081593 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a643628-4b73-4dfe-bb93-145dcf750ae6" path="/var/lib/kubelet/pods/6a643628-4b73-4dfe-bb93-145dcf750ae6/volumes" Oct 14 15:13:53 crc kubenswrapper[4860]: I1014 15:13:53.222928 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"be5646ba-6f94-4628-85ef-5091fee066d5","Type":"ContainerStarted","Data":"01f5e9afda3c9f09b093b01f68441b299c563cbd9b320791b1bd70ae612df2de"} Oct 14 15:13:53 crc kubenswrapper[4860]: I1014 15:13:53.222966 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"be5646ba-6f94-4628-85ef-5091fee066d5","Type":"ContainerStarted","Data":"4f27862869af8ed98c15ae9c944453a47fd949b24a062fae9d7ef2bb047b031c"} Oct 14 15:13:53 crc kubenswrapper[4860]: I1014 15:13:53.222974 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"be5646ba-6f94-4628-85ef-5091fee066d5","Type":"ContainerStarted","Data":"9843c46f52223207079d1700d8e3fed5e0d90216ecd29422214aa00158538113"} Oct 14 15:13:53 crc kubenswrapper[4860]: I1014 15:13:53.224189 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6gkdc" podUID="33caa1e1-70c8-4eb2-b3ee-2400962b4a11" containerName="registry-server" containerID="cri-o://1138cd1747655ca9ec607240870edb3834dee29f40b4b571f0c482bd6767fa16" gracePeriod=2 Oct 14 15:13:53 crc kubenswrapper[4860]: I1014 15:13:53.279451 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.279430229 podStartE2EDuration="2.279430229s" podCreationTimestamp="2025-10-14 15:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:13:53.273384892 +0000 UTC m=+1494.860168331" watchObservedRunningTime="2025-10-14 15:13:53.279430229 +0000 UTC m=+1494.866213688" Oct 14 15:13:53 crc kubenswrapper[4860]: I1014 15:13:53.736875 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6gkdc" Oct 14 15:13:53 crc kubenswrapper[4860]: I1014 15:13:53.889549 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzj7d\" (UniqueName: \"kubernetes.io/projected/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-kube-api-access-fzj7d\") pod \"33caa1e1-70c8-4eb2-b3ee-2400962b4a11\" (UID: \"33caa1e1-70c8-4eb2-b3ee-2400962b4a11\") " Oct 14 15:13:53 crc kubenswrapper[4860]: I1014 15:13:53.889592 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-utilities\") pod \"33caa1e1-70c8-4eb2-b3ee-2400962b4a11\" (UID: \"33caa1e1-70c8-4eb2-b3ee-2400962b4a11\") " Oct 14 15:13:53 crc kubenswrapper[4860]: I1014 15:13:53.889710 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-catalog-content\") pod \"33caa1e1-70c8-4eb2-b3ee-2400962b4a11\" (UID: \"33caa1e1-70c8-4eb2-b3ee-2400962b4a11\") " Oct 14 15:13:53 crc kubenswrapper[4860]: I1014 15:13:53.890414 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-utilities" (OuterVolumeSpecName: "utilities") pod "33caa1e1-70c8-4eb2-b3ee-2400962b4a11" (UID: "33caa1e1-70c8-4eb2-b3ee-2400962b4a11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:13:53 crc kubenswrapper[4860]: I1014 15:13:53.896900 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-kube-api-access-fzj7d" (OuterVolumeSpecName: "kube-api-access-fzj7d") pod "33caa1e1-70c8-4eb2-b3ee-2400962b4a11" (UID: "33caa1e1-70c8-4eb2-b3ee-2400962b4a11"). InnerVolumeSpecName "kube-api-access-fzj7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:13:53 crc kubenswrapper[4860]: I1014 15:13:53.943888 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33caa1e1-70c8-4eb2-b3ee-2400962b4a11" (UID: "33caa1e1-70c8-4eb2-b3ee-2400962b4a11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:13:53 crc kubenswrapper[4860]: I1014 15:13:53.993437 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:53 crc kubenswrapper[4860]: I1014 15:13:53.994427 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzj7d\" (UniqueName: \"kubernetes.io/projected/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-kube-api-access-fzj7d\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:53 crc kubenswrapper[4860]: I1014 15:13:53.994582 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33caa1e1-70c8-4eb2-b3ee-2400962b4a11-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.235909 4860 generic.go:334] "Generic (PLEG): container finished" podID="33caa1e1-70c8-4eb2-b3ee-2400962b4a11" containerID="1138cd1747655ca9ec607240870edb3834dee29f40b4b571f0c482bd6767fa16" exitCode=0 Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.235961 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6gkdc" Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.235980 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gkdc" event={"ID":"33caa1e1-70c8-4eb2-b3ee-2400962b4a11","Type":"ContainerDied","Data":"1138cd1747655ca9ec607240870edb3834dee29f40b4b571f0c482bd6767fa16"} Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.236044 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gkdc" event={"ID":"33caa1e1-70c8-4eb2-b3ee-2400962b4a11","Type":"ContainerDied","Data":"ebdbf19d9902861a063c5c0276e63dd62d6bb9ac06a2a34e83eb6579f23999a8"} Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.236065 4860 scope.go:117] "RemoveContainer" containerID="1138cd1747655ca9ec607240870edb3834dee29f40b4b571f0c482bd6767fa16" Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.260093 4860 scope.go:117] "RemoveContainer" containerID="c27d1af20b44b8235541715814c8a7d9e4d5eb1755e1707a483361bf7a29a248" Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.281133 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6gkdc"] Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.290825 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6gkdc"] Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.294113 4860 scope.go:117] "RemoveContainer" containerID="000b1d155e3eb044547266051bcfd29a51bf18983a68f7da824e6cc57913a524" Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.337659 4860 scope.go:117] "RemoveContainer" containerID="1138cd1747655ca9ec607240870edb3834dee29f40b4b571f0c482bd6767fa16" Oct 14 15:13:54 crc kubenswrapper[4860]: E1014 15:13:54.341470 4860 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1138cd1747655ca9ec607240870edb3834dee29f40b4b571f0c482bd6767fa16\": container with ID starting with 1138cd1747655ca9ec607240870edb3834dee29f40b4b571f0c482bd6767fa16 not found: ID does not exist" containerID="1138cd1747655ca9ec607240870edb3834dee29f40b4b571f0c482bd6767fa16" Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.341520 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1138cd1747655ca9ec607240870edb3834dee29f40b4b571f0c482bd6767fa16"} err="failed to get container status \"1138cd1747655ca9ec607240870edb3834dee29f40b4b571f0c482bd6767fa16\": rpc error: code = NotFound desc = could not find container \"1138cd1747655ca9ec607240870edb3834dee29f40b4b571f0c482bd6767fa16\": container with ID starting with 1138cd1747655ca9ec607240870edb3834dee29f40b4b571f0c482bd6767fa16 not found: ID does not exist" Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.341548 4860 scope.go:117] "RemoveContainer" containerID="c27d1af20b44b8235541715814c8a7d9e4d5eb1755e1707a483361bf7a29a248" Oct 14 15:13:54 crc kubenswrapper[4860]: E1014 15:13:54.342348 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c27d1af20b44b8235541715814c8a7d9e4d5eb1755e1707a483361bf7a29a248\": container with ID starting with c27d1af20b44b8235541715814c8a7d9e4d5eb1755e1707a483361bf7a29a248 not found: ID does not exist" containerID="c27d1af20b44b8235541715814c8a7d9e4d5eb1755e1707a483361bf7a29a248" Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.342384 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c27d1af20b44b8235541715814c8a7d9e4d5eb1755e1707a483361bf7a29a248"} err="failed to get container status \"c27d1af20b44b8235541715814c8a7d9e4d5eb1755e1707a483361bf7a29a248\": rpc error: code = NotFound desc = could not find container \"c27d1af20b44b8235541715814c8a7d9e4d5eb1755e1707a483361bf7a29a248\": container with ID starting with c27d1af20b44b8235541715814c8a7d9e4d5eb1755e1707a483361bf7a29a248 not found: ID does not exist" Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.342406 4860 scope.go:117] "RemoveContainer" containerID="000b1d155e3eb044547266051bcfd29a51bf18983a68f7da824e6cc57913a524" Oct 14 15:13:54 crc kubenswrapper[4860]: E1014 15:13:54.342982 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"000b1d155e3eb044547266051bcfd29a51bf18983a68f7da824e6cc57913a524\": container with ID starting with 000b1d155e3eb044547266051bcfd29a51bf18983a68f7da824e6cc57913a524 not found: ID does not exist" containerID="000b1d155e3eb044547266051bcfd29a51bf18983a68f7da824e6cc57913a524" Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.343113 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000b1d155e3eb044547266051bcfd29a51bf18983a68f7da824e6cc57913a524"} err="failed to get container status \"000b1d155e3eb044547266051bcfd29a51bf18983a68f7da824e6cc57913a524\": rpc error: code = NotFound desc = could not find container \"000b1d155e3eb044547266051bcfd29a51bf18983a68f7da824e6cc57913a524\": container with ID starting with 000b1d155e3eb044547266051bcfd29a51bf18983a68f7da824e6cc57913a524 not found: ID does not exist" Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.689474 4860 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 15:13:54 crc kubenswrapper[4860]: I1014 15:13:54.689567 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 15:13:55 crc kubenswrapper[4860]: I1014 15:13:55.070909 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33caa1e1-70c8-4eb2-b3ee-2400962b4a11" path="/var/lib/kubelet/pods/33caa1e1-70c8-4eb2-b3ee-2400962b4a11/volumes" Oct 14 15:13:55 crc kubenswrapper[4860]: I1014 15:13:55.764740 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 14 15:13:59 crc kubenswrapper[4860]: I1014 15:13:59.245535 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:13:59 crc kubenswrapper[4860]: I1014 15:13:59.246349 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:13:59 crc kubenswrapper[4860]: I1014 15:13:59.246407 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 15:13:59 crc kubenswrapper[4860]: I1014 15:13:59.247258 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 15:13:59 crc kubenswrapper[4860]: I1014 15:13:59.247325 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" gracePeriod=600 Oct 14 15:13:59 crc kubenswrapper[4860]: E1014 15:13:59.377692 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:13:59 crc kubenswrapper[4860]: I1014 15:13:59.689290 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 15:13:59 crc kubenswrapper[4860]: I1014 15:13:59.689348 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 15:14:00 crc kubenswrapper[4860]: I1014 15:14:00.293092 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" exitCode=0 Oct 14 15:14:00 crc 
kubenswrapper[4860]: I1014 15:14:00.293164 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642"} Oct 14 15:14:00 crc kubenswrapper[4860]: I1014 15:14:00.294111 4860 scope.go:117] "RemoveContainer" containerID="8e8c816816ac6aa5296d7e14d541eea35fcda7f2a88ab8bc1a07386f6df3b2dd" Oct 14 15:14:00 crc kubenswrapper[4860]: I1014 15:14:00.295020 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:14:00 crc kubenswrapper[4860]: E1014 15:14:00.295559 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:14:00 crc kubenswrapper[4860]: I1014 15:14:00.701270 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 15:14:00 crc kubenswrapper[4860]: I1014 15:14:00.702119 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 15:14:00 crc kubenswrapper[4860]: I1014 15:14:00.764907 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 14 15:14:00 crc kubenswrapper[4860]: I1014 15:14:00.797556 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 14 15:14:01 crc kubenswrapper[4860]: I1014 15:14:01.337111 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 14 15:14:01 crc kubenswrapper[4860]: I1014 15:14:01.763549 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 15:14:01 crc kubenswrapper[4860]: I1014 15:14:01.763604 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 15:14:02 crc kubenswrapper[4860]: I1014 15:14:02.778216 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="be5646ba-6f94-4628-85ef-5091fee066d5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 15:14:02 crc kubenswrapper[4860]: I1014 15:14:02.778235 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="be5646ba-6f94-4628-85ef-5091fee066d5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 15:14:05 crc kubenswrapper[4860]: I1014 15:14:05.359753 4860 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 14 15:14:09 crc kubenswrapper[4860]: I1014 15:14:09.723504 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 15:14:09 crc kubenswrapper[4860]: I1014 15:14:09.729200 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 15:14:09 crc kubenswrapper[4860]: I1014 15:14:09.738265 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 14 15:14:10 crc kubenswrapper[4860]: I1014 15:14:10.485296 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 14 15:14:11 crc kubenswrapper[4860]: I1014 15:14:11.773788 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 15:14:11 crc kubenswrapper[4860]: I1014 15:14:11.774516 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 15:14:11 crc kubenswrapper[4860]: I1014 15:14:11.775868 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 15:14:11 crc kubenswrapper[4860]: I1014 15:14:11.783313 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 15:14:12 crc kubenswrapper[4860]: I1014 15:14:12.061944 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:14:12 crc kubenswrapper[4860]: E1014 15:14:12.062547 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:14:12 crc kubenswrapper[4860]: I1014 15:14:12.416407 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 15:14:12 crc kubenswrapper[4860]: I1014 15:14:12.437347 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 15:14:14 crc kubenswrapper[4860]: I1014 15:14:14.774695 4860 scope.go:117] "RemoveContainer" containerID="4e8439148aff48414919baa1772e64d247631d2f13c92e279003786c67f343ea" Oct 14 15:14:14 crc kubenswrapper[4860]: I1014 15:14:14.797507 4860 scope.go:117] "RemoveContainer" containerID="2a37aafc386a936a15dbb5e3a28bf4922075ea31d5d8c58daf41a1ba2d807025" Oct 14 15:14:21 crc kubenswrapper[4860]: I1014 15:14:21.460939 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 15:14:22 crc kubenswrapper[4860]: I1014 15:14:22.927209 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 15:14:25 crc kubenswrapper[4860]: I1014 15:14:25.068436 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:14:25 crc kubenswrapper[4860]: E1014 15:14:25.070461 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:14:26 crc kubenswrapper[4860]: I1014 15:14:26.014319 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="90824b73-8623-495c-8bed-fdc67bff987a" containerName="rabbitmq" containerID="cri-o://70591ac01a99688424b0ae9e6880f46c2db6588cc2fa664e4270a59c3f8df7ee" gracePeriod=604796 Oct 14 15:14:27 crc kubenswrapper[4860]: I1014 15:14:27.690449 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="d1afb1fa-9423-4ef6-a771-76c666ca1038" containerName="rabbitmq" containerID="cri-o://05178100a0d4a92372cb32e7d2c9ec1065ec0873b9781d7f70040e70d9b4780f" gracePeriod=604796 Oct 14 15:14:30 crc kubenswrapper[4860]: I1014 15:14:30.111458 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="90824b73-8623-495c-8bed-fdc67bff987a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Oct 14 15:14:30 crc kubenswrapper[4860]: I1014 15:14:30.577912 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d1afb1fa-9423-4ef6-a771-76c666ca1038" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.593588 4860 generic.go:334] "Generic (PLEG): container finished" podID="90824b73-8623-495c-8bed-fdc67bff987a" containerID="70591ac01a99688424b0ae9e6880f46c2db6588cc2fa664e4270a59c3f8df7ee" exitCode=0 Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.593673 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"90824b73-8623-495c-8bed-fdc67bff987a","Type":"ContainerDied","Data":"70591ac01a99688424b0ae9e6880f46c2db6588cc2fa664e4270a59c3f8df7ee"} Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.594128 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"90824b73-8623-495c-8bed-fdc67bff987a","Type":"ContainerDied","Data":"006a89cadc8bd4f2664d15fae504fcb3d11d8daad22fcb0022c4b639ed6c6199"} Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.594145 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="006a89cadc8bd4f2664d15fae504fcb3d11d8daad22fcb0022c4b639ed6c6199" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.596346 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.775566 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-server-conf\") pod \"90824b73-8623-495c-8bed-fdc67bff987a\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.775672 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90824b73-8623-495c-8bed-fdc67bff987a-erlang-cookie-secret\") pod \"90824b73-8623-495c-8bed-fdc67bff987a\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.775723 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfc4p\" (UniqueName: \"kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-kube-api-access-vfc4p\") pod \"90824b73-8623-495c-8bed-fdc67bff987a\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.775770 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-plugins\") pod \"90824b73-8623-495c-8bed-fdc67bff987a\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.775810 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90824b73-8623-495c-8bed-fdc67bff987a-pod-info\") pod \"90824b73-8623-495c-8bed-fdc67bff987a\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.775880 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"90824b73-8623-495c-8bed-fdc67bff987a\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.775907 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-confd\") pod \"90824b73-8623-495c-8bed-fdc67bff987a\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.775941 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-tls\") pod \"90824b73-8623-495c-8bed-fdc67bff987a\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.776002 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-erlang-cookie\") pod \"90824b73-8623-495c-8bed-fdc67bff987a\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.776060 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-config-data\") pod \"90824b73-8623-495c-8bed-fdc67bff987a\" (UID: 
\"90824b73-8623-495c-8bed-fdc67bff987a\") " Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.776087 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-plugins-conf\") pod \"90824b73-8623-495c-8bed-fdc67bff987a\" (UID: \"90824b73-8623-495c-8bed-fdc67bff987a\") " Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.777911 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "90824b73-8623-495c-8bed-fdc67bff987a" (UID: "90824b73-8623-495c-8bed-fdc67bff987a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.780310 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "90824b73-8623-495c-8bed-fdc67bff987a" (UID: "90824b73-8623-495c-8bed-fdc67bff987a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.780857 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "90824b73-8623-495c-8bed-fdc67bff987a" (UID: "90824b73-8623-495c-8bed-fdc67bff987a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.787478 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/90824b73-8623-495c-8bed-fdc67bff987a-pod-info" (OuterVolumeSpecName: "pod-info") pod "90824b73-8623-495c-8bed-fdc67bff987a" (UID: "90824b73-8623-495c-8bed-fdc67bff987a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.808187 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90824b73-8623-495c-8bed-fdc67bff987a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "90824b73-8623-495c-8bed-fdc67bff987a" (UID: "90824b73-8623-495c-8bed-fdc67bff987a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.808300 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-kube-api-access-vfc4p" (OuterVolumeSpecName: "kube-api-access-vfc4p") pod "90824b73-8623-495c-8bed-fdc67bff987a" (UID: "90824b73-8623-495c-8bed-fdc67bff987a"). InnerVolumeSpecName "kube-api-access-vfc4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.808590 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "90824b73-8623-495c-8bed-fdc67bff987a" (UID: "90824b73-8623-495c-8bed-fdc67bff987a"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.814821 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "90824b73-8623-495c-8bed-fdc67bff987a" (UID: "90824b73-8623-495c-8bed-fdc67bff987a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.839224 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-config-data" (OuterVolumeSpecName: "config-data") pod "90824b73-8623-495c-8bed-fdc67bff987a" (UID: "90824b73-8623-495c-8bed-fdc67bff987a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.877752 4860 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.877786 4860 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.877799 4860 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.877808 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.877816 4860 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.877824 4860 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90824b73-8623-495c-8bed-fdc67bff987a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.877833 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfc4p\" (UniqueName: \"kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-kube-api-access-vfc4p\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.877842 4860 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.877849 4860 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90824b73-8623-495c-8bed-fdc67bff987a-pod-info\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.894212 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-server-conf" (OuterVolumeSpecName: "server-conf") pod "90824b73-8623-495c-8bed-fdc67bff987a" (UID: "90824b73-8623-495c-8bed-fdc67bff987a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.908687 4860 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.965280 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "90824b73-8623-495c-8bed-fdc67bff987a" (UID: "90824b73-8623-495c-8bed-fdc67bff987a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.980081 4860 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90824b73-8623-495c-8bed-fdc67bff987a-server-conf\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.980366 4860 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:32 crc kubenswrapper[4860]: I1014 15:14:32.980478 4860 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90824b73-8623-495c-8bed-fdc67bff987a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.601047 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.626726 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.634303 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.654829 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 15:14:33 crc kubenswrapper[4860]: E1014 15:14:33.655240 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33caa1e1-70c8-4eb2-b3ee-2400962b4a11" containerName="extract-content" Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.655279 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="33caa1e1-70c8-4eb2-b3ee-2400962b4a11" containerName="extract-content" Oct 14 15:14:33 crc kubenswrapper[4860]: E1014 15:14:33.655304 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90824b73-8623-495c-8bed-fdc67bff987a" containerName="setup-container" Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.655313 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="90824b73-8623-495c-8bed-fdc67bff987a" containerName="setup-container" Oct 14 15:14:33 crc kubenswrapper[4860]: E1014 15:14:33.655336 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33caa1e1-70c8-4eb2-b3ee-2400962b4a11" containerName="extract-utilities" Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.655356 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="33caa1e1-70c8-4eb2-b3ee-2400962b4a11" containerName="extract-utilities" Oct 14 15:14:33 crc kubenswrapper[4860]: E1014 15:14:33.655376 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33caa1e1-70c8-4eb2-b3ee-2400962b4a11" containerName="registry-server" Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.655381 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="33caa1e1-70c8-4eb2-b3ee-2400962b4a11" containerName="registry-server" Oct 14 15:14:33 crc kubenswrapper[4860]: E1014 15:14:33.655393 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90824b73-8623-495c-8bed-fdc67bff987a" containerName="rabbitmq" Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.655398 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="90824b73-8623-495c-8bed-fdc67bff987a" containerName="rabbitmq" Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.655558 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="33caa1e1-70c8-4eb2-b3ee-2400962b4a11" containerName="registry-server" Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.655579 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="90824b73-8623-495c-8bed-fdc67bff987a" containerName="rabbitmq" Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.656542 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.658298 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.659420 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-w8xwd"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.659555 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.659676 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.659829 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.659975 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.666510 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.677528 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.795710 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9kl6\" (UniqueName: \"kubernetes.io/projected/a7bde387-0de9-44df-84cf-3db5e96019c9-kube-api-access-l9kl6\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.795790 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a7bde387-0de9-44df-84cf-3db5e96019c9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.795857 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7bde387-0de9-44df-84cf-3db5e96019c9-config-data\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.796111 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a7bde387-0de9-44df-84cf-3db5e96019c9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.796358 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a7bde387-0de9-44df-84cf-3db5e96019c9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.796404 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a7bde387-0de9-44df-84cf-3db5e96019c9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.796440 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.796693 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a7bde387-0de9-44df-84cf-3db5e96019c9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.796855 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a7bde387-0de9-44df-84cf-3db5e96019c9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.796966 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a7bde387-0de9-44df-84cf-3db5e96019c9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.797186 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a7bde387-0de9-44df-84cf-3db5e96019c9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.908131 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a7bde387-0de9-44df-84cf-3db5e96019c9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.908185 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a7bde387-0de9-44df-84cf-3db5e96019c9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.908204 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a7bde387-0de9-44df-84cf-3db5e96019c9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.908227 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a7bde387-0de9-44df-84cf-3db5e96019c9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.908266 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9kl6\" (UniqueName: \"kubernetes.io/projected/a7bde387-0de9-44df-84cf-3db5e96019c9-kube-api-access-l9kl6\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.908287 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a7bde387-0de9-44df-84cf-3db5e96019c9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.908313 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7bde387-0de9-44df-84cf-3db5e96019c9-config-data\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.908347 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a7bde387-0de9-44df-84cf-3db5e96019c9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.908371 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a7bde387-0de9-44df-84cf-3db5e96019c9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.908393 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a7bde387-0de9-44df-84cf-3db5e96019c9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.908432 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.908622 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.908740 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a7bde387-0de9-44df-84cf-3db5e96019c9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.909577 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7bde387-0de9-44df-84cf-3db5e96019c9-config-data\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.909643 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a7bde387-0de9-44df-84cf-3db5e96019c9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.909942 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a7bde387-0de9-44df-84cf-3db5e96019c9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.911942 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a7bde387-0de9-44df-84cf-3db5e96019c9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.920497 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a7bde387-0de9-44df-84cf-3db5e96019c9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.920807 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a7bde387-0de9-44df-84cf-3db5e96019c9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.930244 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a7bde387-0de9-44df-84cf-3db5e96019c9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.938981 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a7bde387-0de9-44df-84cf-3db5e96019c9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.941185 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9kl6\" (UniqueName: \"kubernetes.io/projected/a7bde387-0de9-44df-84cf-3db5e96019c9-kube-api-access-l9kl6\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:33 crc kubenswrapper[4860]: I1014 15:14:33.963803 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"a7bde387-0de9-44df-84cf-3db5e96019c9\") " pod="openstack/rabbitmq-server-0"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.020800 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.343830 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.419658 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-server-conf\") pod \"d1afb1fa-9423-4ef6-a771-76c666ca1038\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") "
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.420123 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"d1afb1fa-9423-4ef6-a771-76c666ca1038\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") "
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.420251 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-config-data\") pod \"d1afb1fa-9423-4ef6-a771-76c666ca1038\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") "
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.420279 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kjfj\" (UniqueName: \"kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-kube-api-access-6kjfj\") pod \"d1afb1fa-9423-4ef6-a771-76c666ca1038\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") "
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.420337 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d1afb1fa-9423-4ef6-a771-76c666ca1038-pod-info\") pod \"d1afb1fa-9423-4ef6-a771-76c666ca1038\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") "
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.420368 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d1afb1fa-9423-4ef6-a771-76c666ca1038-erlang-cookie-secret\") pod \"d1afb1fa-9423-4ef6-a771-76c666ca1038\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") "
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.420428 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-tls\") pod \"d1afb1fa-9423-4ef6-a771-76c666ca1038\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") "
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.420475 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-erlang-cookie\") pod \"d1afb1fa-9423-4ef6-a771-76c666ca1038\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") "
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.420510 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-plugins\") pod \"d1afb1fa-9423-4ef6-a771-76c666ca1038\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") "
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.420581 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-confd\") pod \"d1afb1fa-9423-4ef6-a771-76c666ca1038\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") "
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.420603 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-plugins-conf\") pod \"d1afb1fa-9423-4ef6-a771-76c666ca1038\" (UID: \"d1afb1fa-9423-4ef6-a771-76c666ca1038\") "
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.438881 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1afb1fa-9423-4ef6-a771-76c666ca1038-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d1afb1fa-9423-4ef6-a771-76c666ca1038" (UID: "d1afb1fa-9423-4ef6-a771-76c666ca1038"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.441629 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d1afb1fa-9423-4ef6-a771-76c666ca1038" (UID: "d1afb1fa-9423-4ef6-a771-76c666ca1038"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.457786 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d1afb1fa-9423-4ef6-a771-76c666ca1038-pod-info" (OuterVolumeSpecName: "pod-info") pod "d1afb1fa-9423-4ef6-a771-76c666ca1038" (UID: "d1afb1fa-9423-4ef6-a771-76c666ca1038"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.458351 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "d1afb1fa-9423-4ef6-a771-76c666ca1038" (UID: "d1afb1fa-9423-4ef6-a771-76c666ca1038"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.459792 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-kube-api-access-6kjfj" (OuterVolumeSpecName: "kube-api-access-6kjfj") pod "d1afb1fa-9423-4ef6-a771-76c666ca1038" (UID: "d1afb1fa-9423-4ef6-a771-76c666ca1038"). InnerVolumeSpecName "kube-api-access-6kjfj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.464818 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d1afb1fa-9423-4ef6-a771-76c666ca1038" (UID: "d1afb1fa-9423-4ef6-a771-76c666ca1038"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.464876 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d1afb1fa-9423-4ef6-a771-76c666ca1038" (UID: "d1afb1fa-9423-4ef6-a771-76c666ca1038"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.518919 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d1afb1fa-9423-4ef6-a771-76c666ca1038" (UID: "d1afb1fa-9423-4ef6-a771-76c666ca1038"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.540471 4860 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d1afb1fa-9423-4ef6-a771-76c666ca1038-pod-info\") on node \"crc\" DevicePath \"\""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.540505 4860 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d1afb1fa-9423-4ef6-a771-76c666ca1038-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.540516 4860 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.540566 4860 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.540598 4860 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.540607 4860 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-plugins-conf\") on node \"crc\" DevicePath \"\""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.540628 4860 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.540637 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kjfj\" (UniqueName: \"kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-kube-api-access-6kjfj\") on node \"crc\" DevicePath \"\""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.589494 4860 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.590364 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-server-conf" (OuterVolumeSpecName: "server-conf") pod "d1afb1fa-9423-4ef6-a771-76c666ca1038" (UID: "d1afb1fa-9423-4ef6-a771-76c666ca1038"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.595482 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-config-data" (OuterVolumeSpecName: "config-data") pod "d1afb1fa-9423-4ef6-a771-76c666ca1038" (UID: "d1afb1fa-9423-4ef6-a771-76c666ca1038"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.612352 4860 generic.go:334] "Generic (PLEG): container finished" podID="d1afb1fa-9423-4ef6-a771-76c666ca1038" containerID="05178100a0d4a92372cb32e7d2c9ec1065ec0873b9781d7f70040e70d9b4780f" exitCode=0
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.612403 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d1afb1fa-9423-4ef6-a771-76c666ca1038","Type":"ContainerDied","Data":"05178100a0d4a92372cb32e7d2c9ec1065ec0873b9781d7f70040e70d9b4780f"}
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.612433 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d1afb1fa-9423-4ef6-a771-76c666ca1038","Type":"ContainerDied","Data":"abe59e8723c49469ae04348fd61f230f0b1f62c256c599cf0c4d5af2e422a417"}
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.612451 4860 scope.go:117] "RemoveContainer" containerID="05178100a0d4a92372cb32e7d2c9ec1065ec0873b9781d7f70040e70d9b4780f"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.612651 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.633603 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.643082 4860 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-server-conf\") on node \"crc\" DevicePath \"\""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.643200 4860 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.643214 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1afb1fa-9423-4ef6-a771-76c666ca1038-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.665520 4860 scope.go:117] "RemoveContainer" containerID="527c63dd378764bb65f7b9b451d3afac5be5eb908404b20b8b3a7dc19f33d19a"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.685455 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d1afb1fa-9423-4ef6-a771-76c666ca1038" (UID: "d1afb1fa-9423-4ef6-a771-76c666ca1038"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.720310 4860 scope.go:117] "RemoveContainer" containerID="05178100a0d4a92372cb32e7d2c9ec1065ec0873b9781d7f70040e70d9b4780f"
Oct 14 15:14:34 crc kubenswrapper[4860]: E1014 15:14:34.720775 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05178100a0d4a92372cb32e7d2c9ec1065ec0873b9781d7f70040e70d9b4780f\": container with ID starting with 05178100a0d4a92372cb32e7d2c9ec1065ec0873b9781d7f70040e70d9b4780f not found: ID does not exist" containerID="05178100a0d4a92372cb32e7d2c9ec1065ec0873b9781d7f70040e70d9b4780f"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.720809 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05178100a0d4a92372cb32e7d2c9ec1065ec0873b9781d7f70040e70d9b4780f"} err="failed to get container status \"05178100a0d4a92372cb32e7d2c9ec1065ec0873b9781d7f70040e70d9b4780f\": rpc error: code = NotFound desc = could not find container \"05178100a0d4a92372cb32e7d2c9ec1065ec0873b9781d7f70040e70d9b4780f\": container with ID starting with 05178100a0d4a92372cb32e7d2c9ec1065ec0873b9781d7f70040e70d9b4780f not found: ID does not exist"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.720828 4860 scope.go:117] "RemoveContainer" containerID="527c63dd378764bb65f7b9b451d3afac5be5eb908404b20b8b3a7dc19f33d19a"
Oct 14 15:14:34 crc kubenswrapper[4860]: E1014 15:14:34.721192 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"527c63dd378764bb65f7b9b451d3afac5be5eb908404b20b8b3a7dc19f33d19a\": container with ID starting with 527c63dd378764bb65f7b9b451d3afac5be5eb908404b20b8b3a7dc19f33d19a not found: ID does not exist" containerID="527c63dd378764bb65f7b9b451d3afac5be5eb908404b20b8b3a7dc19f33d19a"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.721234 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"527c63dd378764bb65f7b9b451d3afac5be5eb908404b20b8b3a7dc19f33d19a"} err="failed to get container status \"527c63dd378764bb65f7b9b451d3afac5be5eb908404b20b8b3a7dc19f33d19a\": rpc error: code = NotFound desc = could not find container \"527c63dd378764bb65f7b9b451d3afac5be5eb908404b20b8b3a7dc19f33d19a\": container with ID starting with 527c63dd378764bb65f7b9b451d3afac5be5eb908404b20b8b3a7dc19f33d19a not found: ID does not exist"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.744486 4860 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d1afb1fa-9423-4ef6-a771-76c666ca1038-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.948221 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.959631 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.986286 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 14 15:14:34 crc kubenswrapper[4860]: E1014 15:14:34.986740 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1afb1fa-9423-4ef6-a771-76c666ca1038" containerName="setup-container"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.986758 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1afb1fa-9423-4ef6-a771-76c666ca1038" containerName="setup-container"
Oct 14 15:14:34 crc kubenswrapper[4860]: E1014 15:14:34.986771 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1afb1fa-9423-4ef6-a771-76c666ca1038" containerName="rabbitmq"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.986776 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1afb1fa-9423-4ef6-a771-76c666ca1038" containerName="rabbitmq"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.986986 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1afb1fa-9423-4ef6-a771-76c666ca1038" containerName="rabbitmq"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.987941 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.991930 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.992393 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bm47p"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.992471 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.992702 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.992732 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.992882 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Oct 14 15:14:34 crc kubenswrapper[4860]: I1014 15:14:34.993281 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.012396 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.085879 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90824b73-8623-495c-8bed-fdc67bff987a" path="/var/lib/kubelet/pods/90824b73-8623-495c-8bed-fdc67bff987a/volumes"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.087535 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1afb1fa-9423-4ef6-a771-76c666ca1038" path="/var/lib/kubelet/pods/d1afb1fa-9423-4ef6-a771-76c666ca1038/volumes"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.156419 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b636d89a-e295-48f6-8679-c6c7b0f998cf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.156460 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b636d89a-e295-48f6-8679-c6c7b0f998cf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.156481 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b636d89a-e295-48f6-8679-c6c7b0f998cf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.156504 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b636d89a-e295-48f6-8679-c6c7b0f998cf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.156561 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86f4g\" (UniqueName: \"kubernetes.io/projected/b636d89a-e295-48f6-8679-c6c7b0f998cf-kube-api-access-86f4g\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.156580 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.156610 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b636d89a-e295-48f6-8679-c6c7b0f998cf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.156627 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b636d89a-e295-48f6-8679-c6c7b0f998cf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.156664 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b636d89a-e295-48f6-8679-c6c7b0f998cf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.156720 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b636d89a-e295-48f6-8679-c6c7b0f998cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.156739 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b636d89a-e295-48f6-8679-c6c7b0f998cf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.258806 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b636d89a-e295-48f6-8679-c6c7b0f998cf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.258912 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b636d89a-e295-48f6-8679-c6c7b0f998cf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.258946 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b636d89a-e295-48f6-8679-c6c7b0f998cf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.258969 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b636d89a-e295-48f6-8679-c6c7b0f998cf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.259014 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b636d89a-e295-48f6-8679-c6c7b0f998cf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.259090 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86f4g\" (UniqueName: \"kubernetes.io/projected/b636d89a-e295-48f6-8679-c6c7b0f998cf-kube-api-access-86f4g\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.259120 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.259156 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b636d89a-e295-48f6-8679-c6c7b0f998cf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.259179 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b636d89a-e295-48f6-8679-c6c7b0f998cf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.259219 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b636d89a-e295-48f6-8679-c6c7b0f998cf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.259271 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b636d89a-e295-48f6-8679-c6c7b0f998cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.259717 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b636d89a-e295-48f6-8679-c6c7b0f998cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.260211 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b636d89a-e295-48f6-8679-c6c7b0f998cf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.260820 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b636d89a-e295-48f6-8679-c6c7b0f998cf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.261153 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b636d89a-e295-48f6-8679-c6c7b0f998cf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.262438 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b636d89a-e295-48f6-8679-c6c7b0f998cf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.262491 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.264771 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b636d89a-e295-48f6-8679-c6c7b0f998cf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.264819 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b636d89a-e295-48f6-8679-c6c7b0f998cf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.264880 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b636d89a-e295-48f6-8679-c6c7b0f998cf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.265270 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b636d89a-e295-48f6-8679-c6c7b0f998cf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.283180 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86f4g\" (UniqueName: \"kubernetes.io/projected/b636d89a-e295-48f6-8679-c6c7b0f998cf-kube-api-access-86f4g\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.312287 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b636d89a-e295-48f6-8679-c6c7b0f998cf\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.609602 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 14 15:14:35 crc kubenswrapper[4860]: I1014 15:14:35.622243 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a7bde387-0de9-44df-84cf-3db5e96019c9","Type":"ContainerStarted","Data":"bf3ac7a2e07a6246c631102283c12bd472a8478bcb9f7f0f9dd6f50e24dc73bd"}
Oct 14 15:14:36 crc kubenswrapper[4860]: I1014 15:14:36.061630 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642"
Oct 14 15:14:36 crc kubenswrapper[4860]: E1014 15:14:36.062108 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051"
Oct 14 15:14:36 crc kubenswrapper[4860]: I1014 15:14:36.112711 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 14 15:14:36 crc kubenswrapper[4860]: W1014 15:14:36.117514 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb636d89a_e295_48f6_8679_c6c7b0f998cf.slice/crio-b9d68dff9dd3c0853e0485691c585ff04d794cebf681b87dcac3707154026c8b WatchSource:0}: Error finding container b9d68dff9dd3c0853e0485691c585ff04d794cebf681b87dcac3707154026c8b: Status 404 returned error can't find the container with id b9d68dff9dd3c0853e0485691c585ff04d794cebf681b87dcac3707154026c8b
Oct 14 15:14:36 crc kubenswrapper[4860]: I1014 15:14:36.634492 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a7bde387-0de9-44df-84cf-3db5e96019c9","Type":"ContainerStarted","Data":"92ac1b5771da4db987cff2f7bef6d25909fd843415f9e2efff29bd74afb1bf81"}
Oct 14 15:14:36 crc kubenswrapper[4860]: I1014 15:14:36.635538 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b636d89a-e295-48f6-8679-c6c7b0f998cf","Type":"ContainerStarted","Data":"b9d68dff9dd3c0853e0485691c585ff04d794cebf681b87dcac3707154026c8b"}
Oct 14 15:14:37 crc kubenswrapper[4860]: I1014 15:14:37.645097 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b636d89a-e295-48f6-8679-c6c7b0f998cf","Type":"ContainerStarted","Data":"d9032e0c75abe4ea8efc3377820075d9fa6e981af113fa34666cc5aa6977eee3"}
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.028846 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"]
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.032460 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.034594 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.078287 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"]
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.115145 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.115193 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.115231 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-config\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.115250 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.115361 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znds5\" (UniqueName: \"kubernetes.io/projected/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-kube-api-access-znds5\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.115393 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.115410 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.217138 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.217191 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.217304 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-config\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.217342 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.217399 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znds5\" (UniqueName: \"kubernetes.io/projected/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-kube-api-access-znds5\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.217453 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.217488 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.218125 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.218592 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.218772 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.218997 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.219423 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.219434 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-config\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.240164 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znds5\" (UniqueName: \"kubernetes.io/projected/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-kube-api-access-znds5\") pod \"dnsmasq-dns-79bd4cc8c9-cjh2h\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.354956 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:38 crc kubenswrapper[4860]: I1014 15:14:38.877716 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"]
Oct 14 15:14:38 crc kubenswrapper[4860]: W1014 15:14:38.896626 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4947e2b1_515c_4f17_b8f5_d7ad308ee1e5.slice/crio-37a74fefcd3738fca3e231f3b5e4a398934a9e3184a0a8fa2c6af504b07aa0f8 WatchSource:0}: Error finding container 37a74fefcd3738fca3e231f3b5e4a398934a9e3184a0a8fa2c6af504b07aa0f8: Status 404 returned error can't find the container with id 37a74fefcd3738fca3e231f3b5e4a398934a9e3184a0a8fa2c6af504b07aa0f8
Oct 14 15:14:39 crc kubenswrapper[4860]: I1014 15:14:39.665627 4860 generic.go:334] "Generic (PLEG): container finished" podID="4947e2b1-515c-4f17-b8f5-d7ad308ee1e5" containerID="e8f1a54a236fba11675e575194adb658a44ad7f3121f7909ff29415931d3b570" exitCode=0
Oct 14 15:14:39 crc kubenswrapper[4860]: I1014 15:14:39.665968 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h" event={"ID":"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5","Type":"ContainerDied","Data":"e8f1a54a236fba11675e575194adb658a44ad7f3121f7909ff29415931d3b570"}
Oct 14 15:14:39 crc kubenswrapper[4860]: I1014 15:14:39.666389 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h" event={"ID":"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5","Type":"ContainerStarted","Data":"37a74fefcd3738fca3e231f3b5e4a398934a9e3184a0a8fa2c6af504b07aa0f8"}
Oct 14 15:14:40 crc kubenswrapper[4860]: I1014 15:14:40.677560 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h" event={"ID":"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5","Type":"ContainerStarted","Data":"5161743e84c085a9e16e81d003f5e041d5901103a2ee2c86b08c8f15b46decb2"}
Oct 14 15:14:40 crc kubenswrapper[4860]: I1014 15:14:40.677869 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:40 crc kubenswrapper[4860]: I1014 15:14:40.705727 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h" podStartSLOduration=3.7057062739999997 podStartE2EDuration="3.705706274s" podCreationTimestamp="2025-10-14 15:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:14:40.702062865 +0000 UTC m=+1542.288846324" watchObservedRunningTime="2025-10-14 15:14:40.705706274 +0000 UTC m=+1542.292489743"
Oct 14 15:14:47 crc kubenswrapper[4860]: I1014 15:14:47.062126 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642"
Oct 14 15:14:47 crc kubenswrapper[4860]: E1014 15:14:47.062994 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.357111 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.466297 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7k8rg"]
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.466564 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" podUID="c3522c43-5736-44e0-8671-da94de73685a" containerName="dnsmasq-dns" containerID="cri-o://0357c1e02a8087ed3f3969c78b515489f855eeee95a158b3e7ccaf98be5220da" gracePeriod=10
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.708302 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-bh2nm"]
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.710219 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.754048 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-bh2nm"]
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.787789 4860 generic.go:334] "Generic (PLEG): container finished" podID="c3522c43-5736-44e0-8671-da94de73685a" containerID="0357c1e02a8087ed3f3969c78b515489f855eeee95a158b3e7ccaf98be5220da" exitCode=0
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.787834 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" event={"ID":"c3522c43-5736-44e0-8671-da94de73685a","Type":"ContainerDied","Data":"0357c1e02a8087ed3f3969c78b515489f855eeee95a158b3e7ccaf98be5220da"}
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.824121 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.824166 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.824254 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.824274 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.824313 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-config\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.824330 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.824352 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdnnn\" (UniqueName: \"kubernetes.io/projected/2973f190-e42c-4031-9746-70704bafe957-kube-api-access-sdnnn\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.926115 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.926159 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.926203 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-config\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.926219 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.926241 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdnnn\" (UniqueName: \"kubernetes.io/projected/2973f190-e42c-4031-9746-70704bafe957-kube-api-access-sdnnn\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.926319 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.926346 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.927515 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.927545 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.928130 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.928459 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.928666 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.929007 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2973f190-e42c-4031-9746-70704bafe957-config\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:48 crc kubenswrapper[4860]: I1014 15:14:48.946699 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdnnn\" (UniqueName: \"kubernetes.io/projected/2973f190-e42c-4031-9746-70704bafe957-kube-api-access-sdnnn\") pod \"dnsmasq-dns-6ff66b85ff-bh2nm\" (UID: \"2973f190-e42c-4031-9746-70704bafe957\") " pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm"
Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.032841 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg"
Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.038192 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm" Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.129024 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-dns-svc\") pod \"c3522c43-5736-44e0-8671-da94de73685a\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.129679 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-config\") pod \"c3522c43-5736-44e0-8671-da94de73685a\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.129747 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm7zm\" (UniqueName: \"kubernetes.io/projected/c3522c43-5736-44e0-8671-da94de73685a-kube-api-access-tm7zm\") pod \"c3522c43-5736-44e0-8671-da94de73685a\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.129827 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-ovsdbserver-sb\") pod \"c3522c43-5736-44e0-8671-da94de73685a\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.129862 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-dns-swift-storage-0\") pod \"c3522c43-5736-44e0-8671-da94de73685a\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.129961 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-ovsdbserver-nb\") pod \"c3522c43-5736-44e0-8671-da94de73685a\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.159951 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3522c43-5736-44e0-8671-da94de73685a-kube-api-access-tm7zm" (OuterVolumeSpecName: "kube-api-access-tm7zm") pod "c3522c43-5736-44e0-8671-da94de73685a" (UID: "c3522c43-5736-44e0-8671-da94de73685a"). InnerVolumeSpecName "kube-api-access-tm7zm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.191740 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-config" (OuterVolumeSpecName: "config") pod "c3522c43-5736-44e0-8671-da94de73685a" (UID: "c3522c43-5736-44e0-8671-da94de73685a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.210491 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c3522c43-5736-44e0-8671-da94de73685a" (UID: "c3522c43-5736-44e0-8671-da94de73685a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.231858 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3522c43-5736-44e0-8671-da94de73685a" (UID: "c3522c43-5736-44e0-8671-da94de73685a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.232140 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-dns-svc\") pod \"c3522c43-5736-44e0-8671-da94de73685a\" (UID: \"c3522c43-5736-44e0-8671-da94de73685a\") " Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.235056 4860 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.235091 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.235101 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm7zm\" (UniqueName: \"kubernetes.io/projected/c3522c43-5736-44e0-8671-da94de73685a-kube-api-access-tm7zm\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:49 crc kubenswrapper[4860]: W1014 15:14:49.235243 4860 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c3522c43-5736-44e0-8671-da94de73685a/volumes/kubernetes.io~configmap/dns-svc Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.235256 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3522c43-5736-44e0-8671-da94de73685a" (UID: "c3522c43-5736-44e0-8671-da94de73685a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.245327 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3522c43-5736-44e0-8671-da94de73685a" (UID: "c3522c43-5736-44e0-8671-da94de73685a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.248891 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3522c43-5736-44e0-8671-da94de73685a" (UID: "c3522c43-5736-44e0-8671-da94de73685a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.337411 4860 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.337467 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.337478 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3522c43-5736-44e0-8671-da94de73685a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:49 crc kubenswrapper[4860]: W1014 15:14:49.526336 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2973f190_e42c_4031_9746_70704bafe957.slice/crio-d4ac5de2629cb17e530986609f70c57fc6554a1a6f9888bccc9017d6be183120 WatchSource:0}: Error finding container d4ac5de2629cb17e530986609f70c57fc6554a1a6f9888bccc9017d6be183120: Status 404 returned error can't find the container with id d4ac5de2629cb17e530986609f70c57fc6554a1a6f9888bccc9017d6be183120 Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.527393 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-bh2nm"] Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.801062 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm" event={"ID":"2973f190-e42c-4031-9746-70704bafe957","Type":"ContainerStarted","Data":"d4ac5de2629cb17e530986609f70c57fc6554a1a6f9888bccc9017d6be183120"} Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.806769 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" event={"ID":"c3522c43-5736-44e0-8671-da94de73685a","Type":"ContainerDied","Data":"539c5220b8168ad5f835919e47ccaf48e069e96b089f25a9aed0eac92d0505ea"} Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.806818 4860 scope.go:117] "RemoveContainer" containerID="0357c1e02a8087ed3f3969c78b515489f855eeee95a158b3e7ccaf98be5220da" Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.807007 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-7k8rg" Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.856606 4860 scope.go:117] "RemoveContainer" containerID="0fa828ca29976bb6ec8ca33332d417d275afd207c27eb838c1cbc8ca0c6d24fd" Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.869420 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7k8rg"] Oct 14 15:14:49 crc kubenswrapper[4860]: I1014 15:14:49.880849 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7k8rg"] Oct 14 15:14:50 crc kubenswrapper[4860]: I1014 15:14:50.820581 4860 generic.go:334] "Generic (PLEG): container finished" podID="2973f190-e42c-4031-9746-70704bafe957" containerID="6cad0c48ef19135a780eccc2cf600469aa09aabce195abfb05cd72ca36554326" exitCode=0 Oct 14 15:14:50 crc kubenswrapper[4860]: I1014 15:14:50.820793 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm" event={"ID":"2973f190-e42c-4031-9746-70704bafe957","Type":"ContainerDied","Data":"6cad0c48ef19135a780eccc2cf600469aa09aabce195abfb05cd72ca36554326"} Oct 14 15:14:51 crc kubenswrapper[4860]: I1014 15:14:51.074813 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3522c43-5736-44e0-8671-da94de73685a" path="/var/lib/kubelet/pods/c3522c43-5736-44e0-8671-da94de73685a/volumes" Oct 14 15:14:51 crc kubenswrapper[4860]: I1014 15:14:51.834245 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm" event={"ID":"2973f190-e42c-4031-9746-70704bafe957","Type":"ContainerStarted","Data":"943c5fcc72291f92e9136a1682875063be034bd3a8c37157a54f040ef4ecd0c7"} Oct 14 15:14:51 crc kubenswrapper[4860]: I1014 15:14:51.834963 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm" Oct 14 15:14:51 crc kubenswrapper[4860]: I1014 15:14:51.854839 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm" podStartSLOduration=3.854818019 podStartE2EDuration="3.854818019s" podCreationTimestamp="2025-10-14 15:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:14:51.854419479 +0000 UTC m=+1553.441203008" watchObservedRunningTime="2025-10-14 15:14:51.854818019 +0000 UTC m=+1553.441601468" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.040225 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ff66b85ff-bh2nm" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.156075 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"] Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.156300 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h" podUID="4947e2b1-515c-4f17-b8f5-d7ad308ee1e5" containerName="dnsmasq-dns" containerID="cri-o://5161743e84c085a9e16e81d003f5e041d5901103a2ee2c86b08c8f15b46decb2" gracePeriod=10 Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.612122 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.741576 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-ovsdbserver-nb\") pod \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.741679 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znds5\" (UniqueName: \"kubernetes.io/projected/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-kube-api-access-znds5\") pod \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.742541 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-config\") pod \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.742593 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-openstack-edpm-ipam\") pod \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.742688 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-dns-swift-storage-0\") pod \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.742760 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-ovsdbserver-sb\") pod \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.742786 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-dns-svc\") pod \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\" (UID: \"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5\") " Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.758348 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-kube-api-access-znds5" (OuterVolumeSpecName: "kube-api-access-znds5") pod "4947e2b1-515c-4f17-b8f5-d7ad308ee1e5" (UID: "4947e2b1-515c-4f17-b8f5-d7ad308ee1e5"). InnerVolumeSpecName "kube-api-access-znds5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.801511 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4947e2b1-515c-4f17-b8f5-d7ad308ee1e5" (UID: "4947e2b1-515c-4f17-b8f5-d7ad308ee1e5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.815491 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-config" (OuterVolumeSpecName: "config") pod "4947e2b1-515c-4f17-b8f5-d7ad308ee1e5" (UID: "4947e2b1-515c-4f17-b8f5-d7ad308ee1e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.829847 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "4947e2b1-515c-4f17-b8f5-d7ad308ee1e5" (UID: "4947e2b1-515c-4f17-b8f5-d7ad308ee1e5"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.838224 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4947e2b1-515c-4f17-b8f5-d7ad308ee1e5" (UID: "4947e2b1-515c-4f17-b8f5-d7ad308ee1e5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.838755 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4947e2b1-515c-4f17-b8f5-d7ad308ee1e5" (UID: "4947e2b1-515c-4f17-b8f5-d7ad308ee1e5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.845562 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znds5\" (UniqueName: \"kubernetes.io/projected/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-kube-api-access-znds5\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.845589 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-config\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.845598 4860 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.845609 4860 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.845618 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.845627 4860 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.850000 4860 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4947e2b1-515c-4f17-b8f5-d7ad308ee1e5" (UID: "4947e2b1-515c-4f17-b8f5-d7ad308ee1e5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.914745 4860 generic.go:334] "Generic (PLEG): container finished" podID="4947e2b1-515c-4f17-b8f5-d7ad308ee1e5" containerID="5161743e84c085a9e16e81d003f5e041d5901103a2ee2c86b08c8f15b46decb2" exitCode=0 Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.914786 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h" event={"ID":"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5","Type":"ContainerDied","Data":"5161743e84c085a9e16e81d003f5e041d5901103a2ee2c86b08c8f15b46decb2"} Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.914804 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.914819 4860 scope.go:117] "RemoveContainer" containerID="5161743e84c085a9e16e81d003f5e041d5901103a2ee2c86b08c8f15b46decb2" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.914809 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-cjh2h" event={"ID":"4947e2b1-515c-4f17-b8f5-d7ad308ee1e5","Type":"ContainerDied","Data":"37a74fefcd3738fca3e231f3b5e4a398934a9e3184a0a8fa2c6af504b07aa0f8"} Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.947510 4860 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.955737 4860 scope.go:117] "RemoveContainer" containerID="e8f1a54a236fba11675e575194adb658a44ad7f3121f7909ff29415931d3b570" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.958694 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"] Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.965963 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-cjh2h"] Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.975559 4860 scope.go:117] "RemoveContainer" containerID="5161743e84c085a9e16e81d003f5e041d5901103a2ee2c86b08c8f15b46decb2" Oct 14 15:14:59 crc kubenswrapper[4860]: E1014 15:14:59.979513 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5161743e84c085a9e16e81d003f5e041d5901103a2ee2c86b08c8f15b46decb2\": container with ID starting with 5161743e84c085a9e16e81d003f5e041d5901103a2ee2c86b08c8f15b46decb2 not found: ID does not exist" containerID="5161743e84c085a9e16e81d003f5e041d5901103a2ee2c86b08c8f15b46decb2" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.982112 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5161743e84c085a9e16e81d003f5e041d5901103a2ee2c86b08c8f15b46decb2"} err="failed to get container status \"5161743e84c085a9e16e81d003f5e041d5901103a2ee2c86b08c8f15b46decb2\": rpc error: code = NotFound desc = could not find container \"5161743e84c085a9e16e81d003f5e041d5901103a2ee2c86b08c8f15b46decb2\": container with ID starting with 5161743e84c085a9e16e81d003f5e041d5901103a2ee2c86b08c8f15b46decb2 not found: 
ID does not exist" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.982173 4860 scope.go:117] "RemoveContainer" containerID="e8f1a54a236fba11675e575194adb658a44ad7f3121f7909ff29415931d3b570" Oct 14 15:14:59 crc kubenswrapper[4860]: E1014 15:14:59.982583 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8f1a54a236fba11675e575194adb658a44ad7f3121f7909ff29415931d3b570\": container with ID starting with e8f1a54a236fba11675e575194adb658a44ad7f3121f7909ff29415931d3b570 not found: ID does not exist" containerID="e8f1a54a236fba11675e575194adb658a44ad7f3121f7909ff29415931d3b570" Oct 14 15:14:59 crc kubenswrapper[4860]: I1014 15:14:59.982613 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f1a54a236fba11675e575194adb658a44ad7f3121f7909ff29415931d3b570"} err="failed to get container status \"e8f1a54a236fba11675e575194adb658a44ad7f3121f7909ff29415931d3b570\": rpc error: code = NotFound desc = could not find container \"e8f1a54a236fba11675e575194adb658a44ad7f3121f7909ff29415931d3b570\": container with ID starting with e8f1a54a236fba11675e575194adb658a44ad7f3121f7909ff29415931d3b570 not found: ID does not exist" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.144392 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s"] Oct 14 15:15:00 crc kubenswrapper[4860]: E1014 15:15:00.144800 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3522c43-5736-44e0-8671-da94de73685a" containerName="dnsmasq-dns" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.144814 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3522c43-5736-44e0-8671-da94de73685a" containerName="dnsmasq-dns" Oct 14 15:15:00 crc kubenswrapper[4860]: E1014 15:15:00.144835 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3522c43-5736-44e0-8671-da94de73685a" containerName="init" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.144841 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3522c43-5736-44e0-8671-da94de73685a" containerName="init" Oct 14 15:15:00 crc kubenswrapper[4860]: E1014 15:15:00.144867 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4947e2b1-515c-4f17-b8f5-d7ad308ee1e5" containerName="dnsmasq-dns" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.144873 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4947e2b1-515c-4f17-b8f5-d7ad308ee1e5" containerName="dnsmasq-dns" Oct 14 15:15:00 crc kubenswrapper[4860]: E1014 15:15:00.144890 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4947e2b1-515c-4f17-b8f5-d7ad308ee1e5" containerName="init" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.144896 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4947e2b1-515c-4f17-b8f5-d7ad308ee1e5" containerName="init" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.145079 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4947e2b1-515c-4f17-b8f5-d7ad308ee1e5" containerName="dnsmasq-dns" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.145095 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3522c43-5736-44e0-8671-da94de73685a" containerName="dnsmasq-dns" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.145748 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.153758 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.154486 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.158610 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s"] Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.257510 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eba36c59-e61d-461d-b62a-90faf793dff6-secret-volume\") pod \"collect-profiles-29340915-cq49s\" (UID: \"eba36c59-e61d-461d-b62a-90faf793dff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.257677 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eba36c59-e61d-461d-b62a-90faf793dff6-config-volume\") pod \"collect-profiles-29340915-cq49s\" (UID: \"eba36c59-e61d-461d-b62a-90faf793dff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.257720 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6krks\" (UniqueName: \"kubernetes.io/projected/eba36c59-e61d-461d-b62a-90faf793dff6-kube-api-access-6krks\") pod \"collect-profiles-29340915-cq49s\" (UID: \"eba36c59-e61d-461d-b62a-90faf793dff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.359593 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eba36c59-e61d-461d-b62a-90faf793dff6-config-volume\") pod \"collect-profiles-29340915-cq49s\" (UID: \"eba36c59-e61d-461d-b62a-90faf793dff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.359648 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6krks\" (UniqueName: \"kubernetes.io/projected/eba36c59-e61d-461d-b62a-90faf793dff6-kube-api-access-6krks\") pod \"collect-profiles-29340915-cq49s\" (UID: \"eba36c59-e61d-461d-b62a-90faf793dff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.359716 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eba36c59-e61d-461d-b62a-90faf793dff6-secret-volume\") pod \"collect-profiles-29340915-cq49s\" (UID: \"eba36c59-e61d-461d-b62a-90faf793dff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.360886 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eba36c59-e61d-461d-b62a-90faf793dff6-config-volume\") pod 
\"collect-profiles-29340915-cq49s\" (UID: \"eba36c59-e61d-461d-b62a-90faf793dff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.381122 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eba36c59-e61d-461d-b62a-90faf793dff6-secret-volume\") pod \"collect-profiles-29340915-cq49s\" (UID: \"eba36c59-e61d-461d-b62a-90faf793dff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.384979 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6krks\" (UniqueName: \"kubernetes.io/projected/eba36c59-e61d-461d-b62a-90faf793dff6-kube-api-access-6krks\") pod \"collect-profiles-29340915-cq49s\" (UID: \"eba36c59-e61d-461d-b62a-90faf793dff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.529721 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s" Oct 14 15:15:00 crc kubenswrapper[4860]: I1014 15:15:00.996613 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s"] Oct 14 15:15:01 crc kubenswrapper[4860]: W1014 15:15:01.006774 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeba36c59_e61d_461d_b62a_90faf793dff6.slice/crio-6fa3ee3bf5c87957801f6733dbe7159a393cbebb22a2051b7edf50b852294720 WatchSource:0}: Error finding container 6fa3ee3bf5c87957801f6733dbe7159a393cbebb22a2051b7edf50b852294720: Status 404 returned error can't find the container with id 6fa3ee3bf5c87957801f6733dbe7159a393cbebb22a2051b7edf50b852294720 Oct 14 15:15:01 crc kubenswrapper[4860]: I1014 15:15:01.075473 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4947e2b1-515c-4f17-b8f5-d7ad308ee1e5" path="/var/lib/kubelet/pods/4947e2b1-515c-4f17-b8f5-d7ad308ee1e5/volumes" Oct 14 15:15:01 crc kubenswrapper[4860]: I1014 15:15:01.950266 4860 generic.go:334] "Generic (PLEG): container finished" podID="eba36c59-e61d-461d-b62a-90faf793dff6" containerID="4027bc952008939a6a6ddb3b33f74539e9b75b742b9d75639ed7b72170dc7781" exitCode=0 Oct 14 15:15:01 crc kubenswrapper[4860]: I1014 15:15:01.950638 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s" event={"ID":"eba36c59-e61d-461d-b62a-90faf793dff6","Type":"ContainerDied","Data":"4027bc952008939a6a6ddb3b33f74539e9b75b742b9d75639ed7b72170dc7781"} Oct 14 15:15:01 crc kubenswrapper[4860]: I1014 15:15:01.950671 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s" event={"ID":"eba36c59-e61d-461d-b62a-90faf793dff6","Type":"ContainerStarted","Data":"6fa3ee3bf5c87957801f6733dbe7159a393cbebb22a2051b7edf50b852294720"} Oct 14 15:15:02 crc kubenswrapper[4860]: I1014 15:15:02.062065 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:15:02 crc kubenswrapper[4860]: E1014 15:15:02.062607 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:15:03 crc kubenswrapper[4860]: I1014 15:15:03.301807 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s" Oct 14 15:15:03 crc kubenswrapper[4860]: I1014 15:15:03.422180 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6krks\" (UniqueName: \"kubernetes.io/projected/eba36c59-e61d-461d-b62a-90faf793dff6-kube-api-access-6krks\") pod \"eba36c59-e61d-461d-b62a-90faf793dff6\" (UID: \"eba36c59-e61d-461d-b62a-90faf793dff6\") " Oct 14 15:15:03 crc kubenswrapper[4860]: I1014 15:15:03.422247 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eba36c59-e61d-461d-b62a-90faf793dff6-secret-volume\") pod \"eba36c59-e61d-461d-b62a-90faf793dff6\" (UID: \"eba36c59-e61d-461d-b62a-90faf793dff6\") " Oct 14 15:15:03 crc kubenswrapper[4860]: I1014 15:15:03.422295 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eba36c59-e61d-461d-b62a-90faf793dff6-config-volume\") pod \"eba36c59-e61d-461d-b62a-90faf793dff6\" (UID: \"eba36c59-e61d-461d-b62a-90faf793dff6\") " Oct 14 15:15:03 crc kubenswrapper[4860]: I1014 15:15:03.423765 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eba36c59-e61d-461d-b62a-90faf793dff6-config-volume" (OuterVolumeSpecName: "config-volume") pod "eba36c59-e61d-461d-b62a-90faf793dff6" (UID: "eba36c59-e61d-461d-b62a-90faf793dff6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:15:03 crc kubenswrapper[4860]: I1014 15:15:03.430014 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eba36c59-e61d-461d-b62a-90faf793dff6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eba36c59-e61d-461d-b62a-90faf793dff6" (UID: "eba36c59-e61d-461d-b62a-90faf793dff6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:15:03 crc kubenswrapper[4860]: I1014 15:15:03.430263 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eba36c59-e61d-461d-b62a-90faf793dff6-kube-api-access-6krks" (OuterVolumeSpecName: "kube-api-access-6krks") pod "eba36c59-e61d-461d-b62a-90faf793dff6" (UID: "eba36c59-e61d-461d-b62a-90faf793dff6"). InnerVolumeSpecName "kube-api-access-6krks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:15:03 crc kubenswrapper[4860]: I1014 15:15:03.524273 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6krks\" (UniqueName: \"kubernetes.io/projected/eba36c59-e61d-461d-b62a-90faf793dff6-kube-api-access-6krks\") on node \"crc\" DevicePath \"\"" Oct 14 15:15:03 crc kubenswrapper[4860]: I1014 15:15:03.524305 4860 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eba36c59-e61d-461d-b62a-90faf793dff6-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 15:15:03 crc kubenswrapper[4860]: I1014 15:15:03.524314 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eba36c59-e61d-461d-b62a-90faf793dff6-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 15:15:03 crc kubenswrapper[4860]: I1014 15:15:03.968556 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s" event={"ID":"eba36c59-e61d-461d-b62a-90faf793dff6","Type":"ContainerDied","Data":"6fa3ee3bf5c87957801f6733dbe7159a393cbebb22a2051b7edf50b852294720"} Oct 14 15:15:03 crc kubenswrapper[4860]: I1014 15:15:03.968612 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fa3ee3bf5c87957801f6733dbe7159a393cbebb22a2051b7edf50b852294720" Oct 14 15:15:03 crc kubenswrapper[4860]: I1014 15:15:03.968667 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s" Oct 14 15:15:09 crc kubenswrapper[4860]: I1014 15:15:09.016489 4860 generic.go:334] "Generic (PLEG): container finished" podID="a7bde387-0de9-44df-84cf-3db5e96019c9" containerID="92ac1b5771da4db987cff2f7bef6d25909fd843415f9e2efff29bd74afb1bf81" exitCode=0 Oct 14 15:15:09 crc kubenswrapper[4860]: I1014 15:15:09.016595 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a7bde387-0de9-44df-84cf-3db5e96019c9","Type":"ContainerDied","Data":"92ac1b5771da4db987cff2f7bef6d25909fd843415f9e2efff29bd74afb1bf81"} Oct 14 15:15:10 crc kubenswrapper[4860]: I1014 15:15:10.026234 4860 generic.go:334] "Generic (PLEG): container finished" podID="b636d89a-e295-48f6-8679-c6c7b0f998cf" containerID="d9032e0c75abe4ea8efc3377820075d9fa6e981af113fa34666cc5aa6977eee3" exitCode=0 Oct 14 15:15:10 crc kubenswrapper[4860]: I1014 15:15:10.026305 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b636d89a-e295-48f6-8679-c6c7b0f998cf","Type":"ContainerDied","Data":"d9032e0c75abe4ea8efc3377820075d9fa6e981af113fa34666cc5aa6977eee3"} Oct 14 15:15:10 crc kubenswrapper[4860]: I1014 15:15:10.030048 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a7bde387-0de9-44df-84cf-3db5e96019c9","Type":"ContainerStarted","Data":"c9b25f039ee962c706ab3abc5f8f1398ad18914d69f8a6480f6762a1b874bac7"} Oct 14 15:15:10 crc kubenswrapper[4860]: I1014 15:15:10.030238 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 14 15:15:10 crc kubenswrapper[4860]: I1014 15:15:10.133168 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.133145925 podStartE2EDuration="37.133145925s" podCreationTimestamp="2025-10-14 15:14:33 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:15:10.106713831 +0000 UTC m=+1571.693497280" watchObservedRunningTime="2025-10-14 15:15:10.133145925 +0000 UTC m=+1571.719929374" Oct 14 15:15:11 crc kubenswrapper[4860]: I1014 15:15:11.041011 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b636d89a-e295-48f6-8679-c6c7b0f998cf","Type":"ContainerStarted","Data":"cdf8ee0d19c8d44a380ab29f070f25e855c0e4c49423413705bf4191ec6a3d79"} Oct 14 15:15:11 crc kubenswrapper[4860]: I1014 15:15:11.042239 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:15:11 crc kubenswrapper[4860]: I1014 15:15:11.072492 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.072470571 podStartE2EDuration="37.072470571s" podCreationTimestamp="2025-10-14 15:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 15:15:11.059637268 +0000 UTC m=+1572.646420737" watchObservedRunningTime="2025-10-14 15:15:11.072470571 +0000 UTC m=+1572.659254020" Oct 14 15:15:14 crc kubenswrapper[4860]: I1014 15:15:14.061701 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:15:14 crc kubenswrapper[4860]: E1014 15:15:14.062384 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:15:15 crc kubenswrapper[4860]: I1014 15:15:15.055399 4860 scope.go:117] "RemoveContainer" containerID="70591ac01a99688424b0ae9e6880f46c2db6588cc2fa664e4270a59c3f8df7ee" Oct 14 15:15:15 crc kubenswrapper[4860]: I1014 15:15:15.089510 4860 scope.go:117] "RemoveContainer" containerID="7edec3391beba6df09d7c2f4269589118180147c8d551fdd5fca8075f23ccbba" Oct 14 15:15:15 crc kubenswrapper[4860]: I1014 15:15:15.113526 4860 scope.go:117] "RemoveContainer" containerID="edaaf103a300e98b5662cbf9cb71f3bbbc54d1cf253d9dff60f253a727961e2b" Oct 14 15:15:15 crc kubenswrapper[4860]: I1014 15:15:15.174856 4860 scope.go:117] "RemoveContainer" containerID="d1eaf723ebba156258c6570387c8d3c1cbda0f25874c523ade080f466724e5a7" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.326496 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk"] Oct 14 15:15:18 crc kubenswrapper[4860]: E1014 15:15:18.327296 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eba36c59-e61d-461d-b62a-90faf793dff6" containerName="collect-profiles" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.327310 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="eba36c59-e61d-461d-b62a-90faf793dff6" containerName="collect-profiles" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.327505 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="eba36c59-e61d-461d-b62a-90faf793dff6" containerName="collect-profiles" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 
15:15:18.328231 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.331332 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.331661 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.332132 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.332309 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.393932 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk"] Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.519495 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk\" (UID: \"442b40ad-4a75-4690-ab2a-a63194e46aac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.519593 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk\" (UID: \"442b40ad-4a75-4690-ab2a-a63194e46aac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.519631 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btm52\" (UniqueName: \"kubernetes.io/projected/442b40ad-4a75-4690-ab2a-a63194e46aac-kube-api-access-btm52\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk\" (UID: \"442b40ad-4a75-4690-ab2a-a63194e46aac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.519655 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk\" (UID: \"442b40ad-4a75-4690-ab2a-a63194e46aac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.621216 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk\" (UID: \"442b40ad-4a75-4690-ab2a-a63194e46aac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.621339 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk\" (UID: \"442b40ad-4a75-4690-ab2a-a63194e46aac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.621405 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk\" (UID: \"442b40ad-4a75-4690-ab2a-a63194e46aac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.621441 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btm52\" (UniqueName: \"kubernetes.io/projected/442b40ad-4a75-4690-ab2a-a63194e46aac-kube-api-access-btm52\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk\" (UID: \"442b40ad-4a75-4690-ab2a-a63194e46aac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.626524 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk\" (UID: \"442b40ad-4a75-4690-ab2a-a63194e46aac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.627234 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk\" (UID: \"442b40ad-4a75-4690-ab2a-a63194e46aac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.635866 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk\" (UID: \"442b40ad-4a75-4690-ab2a-a63194e46aac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.645503 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btm52\" (UniqueName: \"kubernetes.io/projected/442b40ad-4a75-4690-ab2a-a63194e46aac-kube-api-access-btm52\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk\" (UID: \"442b40ad-4a75-4690-ab2a-a63194e46aac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" Oct 14 15:15:18 crc kubenswrapper[4860]: I1014 15:15:18.647633 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" Oct 14 15:15:19 crc kubenswrapper[4860]: I1014 15:15:19.579729 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk"] Oct 14 15:15:20 crc kubenswrapper[4860]: I1014 15:15:20.120327 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" event={"ID":"442b40ad-4a75-4690-ab2a-a63194e46aac","Type":"ContainerStarted","Data":"e45de8ed78fc295f7e7c17805add90d799e249d340b43a81fe3c447086901532"} Oct 14 15:15:24 crc kubenswrapper[4860]: I1014 15:15:24.024405 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 14 15:15:25 crc kubenswrapper[4860]: I1014 15:15:25.061842 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:15:25 crc kubenswrapper[4860]: E1014 15:15:25.062096 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:15:25 crc kubenswrapper[4860]: I1014 15:15:25.615399 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 14 15:15:34 crc kubenswrapper[4860]: I1014 15:15:34.266586 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" event={"ID":"442b40ad-4a75-4690-ab2a-a63194e46aac","Type":"ContainerStarted","Data":"ecd817398a2b767d0beb9b689716da86015777565696ad40da08c704e22129a2"} Oct 14 15:15:34 crc kubenswrapper[4860]: I1014 15:15:34.286781 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" podStartSLOduration=2.111962245 podStartE2EDuration="16.286762689s" podCreationTimestamp="2025-10-14 15:15:18 +0000 UTC" firstStartedPulling="2025-10-14 15:15:19.58087756 +0000 UTC m=+1581.167661009" lastFinishedPulling="2025-10-14 15:15:33.755678004 +0000 UTC m=+1595.342461453" observedRunningTime="2025-10-14 15:15:34.282665429 +0000 UTC m=+1595.869448888" watchObservedRunningTime="2025-10-14 15:15:34.286762689 +0000 UTC m=+1595.873546138" Oct 14 15:15:38 crc kubenswrapper[4860]: I1014 15:15:38.061590 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:15:38 crc kubenswrapper[4860]: E1014 15:15:38.061945 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:15:49 crc kubenswrapper[4860]: I1014 15:15:49.401121 4860 generic.go:334] "Generic (PLEG): container finished" podID="442b40ad-4a75-4690-ab2a-a63194e46aac" containerID="ecd817398a2b767d0beb9b689716da86015777565696ad40da08c704e22129a2" 
exitCode=0 Oct 14 15:15:49 crc kubenswrapper[4860]: I1014 15:15:49.401225 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" event={"ID":"442b40ad-4a75-4690-ab2a-a63194e46aac","Type":"ContainerDied","Data":"ecd817398a2b767d0beb9b689716da86015777565696ad40da08c704e22129a2"} Oct 14 15:15:50 crc kubenswrapper[4860]: I1014 15:15:50.812109 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" Oct 14 15:15:50 crc kubenswrapper[4860]: I1014 15:15:50.943880 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-inventory\") pod \"442b40ad-4a75-4690-ab2a-a63194e46aac\" (UID: \"442b40ad-4a75-4690-ab2a-a63194e46aac\") " Oct 14 15:15:50 crc kubenswrapper[4860]: I1014 15:15:50.943987 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btm52\" (UniqueName: \"kubernetes.io/projected/442b40ad-4a75-4690-ab2a-a63194e46aac-kube-api-access-btm52\") pod \"442b40ad-4a75-4690-ab2a-a63194e46aac\" (UID: \"442b40ad-4a75-4690-ab2a-a63194e46aac\") " Oct 14 15:15:50 crc kubenswrapper[4860]: I1014 15:15:50.944113 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-ssh-key\") pod \"442b40ad-4a75-4690-ab2a-a63194e46aac\" (UID: \"442b40ad-4a75-4690-ab2a-a63194e46aac\") " Oct 14 15:15:50 crc kubenswrapper[4860]: I1014 15:15:50.944154 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-repo-setup-combined-ca-bundle\") pod \"442b40ad-4a75-4690-ab2a-a63194e46aac\" (UID: \"442b40ad-4a75-4690-ab2a-a63194e46aac\") " Oct 14 15:15:50 crc kubenswrapper[4860]: I1014 15:15:50.949537 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/442b40ad-4a75-4690-ab2a-a63194e46aac-kube-api-access-btm52" (OuterVolumeSpecName: "kube-api-access-btm52") pod "442b40ad-4a75-4690-ab2a-a63194e46aac" (UID: "442b40ad-4a75-4690-ab2a-a63194e46aac"). InnerVolumeSpecName "kube-api-access-btm52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:15:50 crc kubenswrapper[4860]: I1014 15:15:50.950521 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "442b40ad-4a75-4690-ab2a-a63194e46aac" (UID: "442b40ad-4a75-4690-ab2a-a63194e46aac"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:15:50 crc kubenswrapper[4860]: I1014 15:15:50.970018 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "442b40ad-4a75-4690-ab2a-a63194e46aac" (UID: "442b40ad-4a75-4690-ab2a-a63194e46aac"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:15:50 crc kubenswrapper[4860]: I1014 15:15:50.970995 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-inventory" (OuterVolumeSpecName: "inventory") pod "442b40ad-4a75-4690-ab2a-a63194e46aac" (UID: "442b40ad-4a75-4690-ab2a-a63194e46aac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.045857 4860 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.045894 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.045910 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btm52\" (UniqueName: \"kubernetes.io/projected/442b40ad-4a75-4690-ab2a-a63194e46aac-kube-api-access-btm52\") on node \"crc\" DevicePath \"\"" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.045921 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/442b40ad-4a75-4690-ab2a-a63194e46aac-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.418501 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" event={"ID":"442b40ad-4a75-4690-ab2a-a63194e46aac","Type":"ContainerDied","Data":"e45de8ed78fc295f7e7c17805add90d799e249d340b43a81fe3c447086901532"} Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.418862 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e45de8ed78fc295f7e7c17805add90d799e249d340b43a81fe3c447086901532" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.418612 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.524682 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns"] Oct 14 15:15:51 crc kubenswrapper[4860]: E1014 15:15:51.525305 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442b40ad-4a75-4690-ab2a-a63194e46aac" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.525334 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="442b40ad-4a75-4690-ab2a-a63194e46aac" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.525729 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="442b40ad-4a75-4690-ab2a-a63194e46aac" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.526506 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.532270 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.532270 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.532494 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.532628 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.540927 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns"] Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.671790 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a935dc27-6373-4538-8676-b2532a79575c-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c77ns\" (UID: \"a935dc27-6373-4538-8676-b2532a79575c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.671929 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b656p\" (UniqueName: \"kubernetes.io/projected/a935dc27-6373-4538-8676-b2532a79575c-kube-api-access-b656p\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c77ns\" (UID: \"a935dc27-6373-4538-8676-b2532a79575c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.671955 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a935dc27-6373-4538-8676-b2532a79575c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c77ns\" (UID: \"a935dc27-6373-4538-8676-b2532a79575c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.773815 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a935dc27-6373-4538-8676-b2532a79575c-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c77ns\" (UID: \"a935dc27-6373-4538-8676-b2532a79575c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.773946 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b656p\" (UniqueName: \"kubernetes.io/projected/a935dc27-6373-4538-8676-b2532a79575c-kube-api-access-b656p\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c77ns\" (UID: \"a935dc27-6373-4538-8676-b2532a79575c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.773971 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a935dc27-6373-4538-8676-b2532a79575c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c77ns\" (UID: \"a935dc27-6373-4538-8676-b2532a79575c\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.778266 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a935dc27-6373-4538-8676-b2532a79575c-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c77ns\" (UID: \"a935dc27-6373-4538-8676-b2532a79575c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.782613 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a935dc27-6373-4538-8676-b2532a79575c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c77ns\" (UID: \"a935dc27-6373-4538-8676-b2532a79575c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.794836 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b656p\" (UniqueName: \"kubernetes.io/projected/a935dc27-6373-4538-8676-b2532a79575c-kube-api-access-b656p\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c77ns\" (UID: \"a935dc27-6373-4538-8676-b2532a79575c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" Oct 14 15:15:51 crc kubenswrapper[4860]: I1014 15:15:51.847620 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" Oct 14 15:15:52 crc kubenswrapper[4860]: I1014 15:15:52.342895 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns"] Oct 14 15:15:52 crc kubenswrapper[4860]: I1014 15:15:52.428725 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" event={"ID":"a935dc27-6373-4538-8676-b2532a79575c","Type":"ContainerStarted","Data":"8193712b579fc27f6c87ee275223d1eb1ec4dd4608367dcc652ad6eb2b03c633"} Oct 14 15:15:53 crc kubenswrapper[4860]: I1014 15:15:53.061514 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:15:53 crc kubenswrapper[4860]: E1014 15:15:53.062190 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:15:53 crc kubenswrapper[4860]: I1014 15:15:53.442222 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" event={"ID":"a935dc27-6373-4538-8676-b2532a79575c","Type":"ContainerStarted","Data":"d72d311f29becc0aad1c3481ed43b94007e6c617c167dde8128df0605892e136"} Oct 14 15:15:53 crc kubenswrapper[4860]: I1014 15:15:53.460589 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" podStartSLOduration=2.243027629 podStartE2EDuration="2.460569532s" podCreationTimestamp="2025-10-14 15:15:51 +0000 UTC" firstStartedPulling="2025-10-14 15:15:52.338741138 +0000 UTC m=+1613.925524577" lastFinishedPulling="2025-10-14 15:15:52.556283041 +0000 UTC m=+1614.143066480" observedRunningTime="2025-10-14 
15:15:53.456318789 +0000 UTC m=+1615.043102248" watchObservedRunningTime="2025-10-14 15:15:53.460569532 +0000 UTC m=+1615.047352981" Oct 14 15:15:55 crc kubenswrapper[4860]: I1014 15:15:55.461820 4860 generic.go:334] "Generic (PLEG): container finished" podID="a935dc27-6373-4538-8676-b2532a79575c" containerID="d72d311f29becc0aad1c3481ed43b94007e6c617c167dde8128df0605892e136" exitCode=0 Oct 14 15:15:55 crc kubenswrapper[4860]: I1014 15:15:55.461871 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" event={"ID":"a935dc27-6373-4538-8676-b2532a79575c","Type":"ContainerDied","Data":"d72d311f29becc0aad1c3481ed43b94007e6c617c167dde8128df0605892e136"} Oct 14 15:15:56 crc kubenswrapper[4860]: I1014 15:15:56.835764 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" Oct 14 15:15:56 crc kubenswrapper[4860]: I1014 15:15:56.977550 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a935dc27-6373-4538-8676-b2532a79575c-inventory\") pod \"a935dc27-6373-4538-8676-b2532a79575c\" (UID: \"a935dc27-6373-4538-8676-b2532a79575c\") " Oct 14 15:15:56 crc kubenswrapper[4860]: I1014 15:15:56.977604 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a935dc27-6373-4538-8676-b2532a79575c-ssh-key\") pod \"a935dc27-6373-4538-8676-b2532a79575c\" (UID: \"a935dc27-6373-4538-8676-b2532a79575c\") " Oct 14 15:15:56 crc kubenswrapper[4860]: I1014 15:15:56.977677 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b656p\" (UniqueName: \"kubernetes.io/projected/a935dc27-6373-4538-8676-b2532a79575c-kube-api-access-b656p\") pod \"a935dc27-6373-4538-8676-b2532a79575c\" (UID: \"a935dc27-6373-4538-8676-b2532a79575c\") " Oct 14 15:15:56 crc kubenswrapper[4860]: I1014 15:15:56.987633 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a935dc27-6373-4538-8676-b2532a79575c-kube-api-access-b656p" (OuterVolumeSpecName: "kube-api-access-b656p") pod "a935dc27-6373-4538-8676-b2532a79575c" (UID: "a935dc27-6373-4538-8676-b2532a79575c"). InnerVolumeSpecName "kube-api-access-b656p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.009224 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a935dc27-6373-4538-8676-b2532a79575c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a935dc27-6373-4538-8676-b2532a79575c" (UID: "a935dc27-6373-4538-8676-b2532a79575c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.016382 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a935dc27-6373-4538-8676-b2532a79575c-inventory" (OuterVolumeSpecName: "inventory") pod "a935dc27-6373-4538-8676-b2532a79575c" (UID: "a935dc27-6373-4538-8676-b2532a79575c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.079759 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b656p\" (UniqueName: \"kubernetes.io/projected/a935dc27-6373-4538-8676-b2532a79575c-kube-api-access-b656p\") on node \"crc\" DevicePath \"\"" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.079798 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a935dc27-6373-4538-8676-b2532a79575c-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.079807 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a935dc27-6373-4538-8676-b2532a79575c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.482478 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" event={"ID":"a935dc27-6373-4538-8676-b2532a79575c","Type":"ContainerDied","Data":"8193712b579fc27f6c87ee275223d1eb1ec4dd4608367dcc652ad6eb2b03c633"} Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.482524 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8193712b579fc27f6c87ee275223d1eb1ec4dd4608367dcc652ad6eb2b03c633" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.482582 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c77ns" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.551555 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j"] Oct 14 15:15:57 crc kubenswrapper[4860]: E1014 15:15:57.552020 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a935dc27-6373-4538-8676-b2532a79575c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.552064 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a935dc27-6373-4538-8676-b2532a79575c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.552366 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="a935dc27-6373-4538-8676-b2532a79575c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.553192 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.556387 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.556686 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.557772 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.562665 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.572168 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j"] Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.692472 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j\" (UID: \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.692582 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j\" (UID: \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.692664 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j\" (UID: \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.692718 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmmj6\" (UniqueName: \"kubernetes.io/projected/18e270ba-e48c-4f9e-bc6a-8269b31f5698-kube-api-access-cmmj6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j\" (UID: \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.794381 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmmj6\" (UniqueName: \"kubernetes.io/projected/18e270ba-e48c-4f9e-bc6a-8269b31f5698-kube-api-access-cmmj6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j\" (UID: \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.794487 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j\" (UID: \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.794569 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j\" (UID: \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.794616 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j\" (UID: \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.798404 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j\" (UID: \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.798906 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j\" (UID: \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.804549 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j\" (UID: \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.812011 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmmj6\" (UniqueName: \"kubernetes.io/projected/18e270ba-e48c-4f9e-bc6a-8269b31f5698-kube-api-access-cmmj6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j\" (UID: \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" Oct 14 15:15:57 crc kubenswrapper[4860]: I1014 15:15:57.925578 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" Oct 14 15:15:58 crc kubenswrapper[4860]: W1014 15:15:58.429225 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18e270ba_e48c_4f9e_bc6a_8269b31f5698.slice/crio-22be694512b686ee5d3ea4aa16c32613a39be30ba02e81478c4da24513815026 WatchSource:0}: Error finding container 22be694512b686ee5d3ea4aa16c32613a39be30ba02e81478c4da24513815026: Status 404 returned error can't find the container with id 22be694512b686ee5d3ea4aa16c32613a39be30ba02e81478c4da24513815026 Oct 14 15:15:58 crc kubenswrapper[4860]: I1014 15:15:58.430092 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j"] Oct 14 15:15:58 crc kubenswrapper[4860]: I1014 15:15:58.491639 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" event={"ID":"18e270ba-e48c-4f9e-bc6a-8269b31f5698","Type":"ContainerStarted","Data":"22be694512b686ee5d3ea4aa16c32613a39be30ba02e81478c4da24513815026"} Oct 14 15:15:59 crc kubenswrapper[4860]: I1014 15:15:59.504461 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" event={"ID":"18e270ba-e48c-4f9e-bc6a-8269b31f5698","Type":"ContainerStarted","Data":"88906c26996fbfd83fb63b71bfd5f80bd119c19fbd259d8fe62d44418a663d3a"} Oct 14 15:15:59 crc kubenswrapper[4860]: I1014 15:15:59.528503 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" podStartSLOduration=2.328168762 podStartE2EDuration="2.528486165s" podCreationTimestamp="2025-10-14 15:15:57 +0000 UTC" firstStartedPulling="2025-10-14 15:15:58.431336433 +0000 UTC m=+1620.018119882" lastFinishedPulling="2025-10-14 15:15:58.631653836 +0000 UTC m=+1620.218437285" observedRunningTime="2025-10-14 15:15:59.522250174 +0000 UTC m=+1621.109033643" watchObservedRunningTime="2025-10-14 15:15:59.528486165 +0000 UTC m=+1621.115269614" Oct 14 15:16:08 crc kubenswrapper[4860]: I1014 15:16:08.077800 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:16:08 crc kubenswrapper[4860]: E1014 15:16:08.078853 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:16:15 crc kubenswrapper[4860]: I1014 15:16:15.300144 4860 scope.go:117] "RemoveContainer" containerID="88a76723e02d8de3fc034bee165c642e009b34660ef7316fa335fab79b9b9a10" Oct 14 15:16:20 crc kubenswrapper[4860]: I1014 15:16:20.062070 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:16:20 crc kubenswrapper[4860]: E1014 15:16:20.062940 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:16:34 crc kubenswrapper[4860]: I1014 15:16:34.061256 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:16:34 crc kubenswrapper[4860]: E1014 15:16:34.061987 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:16:47 crc kubenswrapper[4860]: I1014 15:16:47.063301 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:16:47 crc kubenswrapper[4860]: E1014 15:16:47.064135 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:17:00 crc kubenswrapper[4860]: I1014 15:17:00.061280 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:17:00 crc kubenswrapper[4860]: E1014 15:17:00.062092 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:17:14 crc kubenswrapper[4860]: I1014 15:17:14.062930 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:17:14 crc kubenswrapper[4860]: E1014 15:17:14.063601 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:17:15 crc kubenswrapper[4860]: I1014 15:17:15.370488 4860 scope.go:117] "RemoveContainer" containerID="2f07f5e27542f3316566d054379dc958bbe6c97339b6bf19e8cb2419a843fb87" Oct 14 15:17:15 crc kubenswrapper[4860]: I1014 15:17:15.454729 4860 scope.go:117] "RemoveContainer" containerID="d21e4b78bd3da16aadb56a4eff731c94fc532f748a6ad2a94b4c62f73d697c02" Oct 14 15:17:29 crc kubenswrapper[4860]: I1014 15:17:29.071501 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:17:29 crc kubenswrapper[4860]: E1014 15:17:29.072388 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:17:41 crc kubenswrapper[4860]: I1014 15:17:41.062219 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:17:41 crc kubenswrapper[4860]: E1014 15:17:41.063002 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:17:53 crc kubenswrapper[4860]: I1014 15:17:53.062836 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:17:53 crc kubenswrapper[4860]: E1014 15:17:53.063967 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:18:07 crc kubenswrapper[4860]: I1014 15:18:07.063696 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:18:07 crc kubenswrapper[4860]: E1014 15:18:07.064396 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:18:19 crc kubenswrapper[4860]: I1014 15:18:19.068840 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:18:19 crc kubenswrapper[4860]: E1014 15:18:19.069603 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:18:34 crc kubenswrapper[4860]: I1014 15:18:34.062413 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:18:34 crc kubenswrapper[4860]: E1014 15:18:34.063198 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:18:37 crc kubenswrapper[4860]: I1014 15:18:37.046492 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-7n45x"] Oct 14 15:18:37 crc kubenswrapper[4860]: I1014 15:18:37.071749 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-7n45x"] Oct 14 15:18:38 crc kubenswrapper[4860]: I1014 15:18:38.034607 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-dp25v"] Oct 14 15:18:38 crc kubenswrapper[4860]: I1014 15:18:38.044176 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-258hf"] Oct 14 15:18:38 crc kubenswrapper[4860]: I1014 15:18:38.052879 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-258hf"] Oct 14 15:18:38 crc kubenswrapper[4860]: I1014 15:18:38.061210 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-dp25v"] Oct 14 15:18:39 crc kubenswrapper[4860]: I1014 15:18:39.071742 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48903075-196c-4f29-8246-9e1a3ed97181" path="/var/lib/kubelet/pods/48903075-196c-4f29-8246-9e1a3ed97181/volumes" Oct 14 15:18:39 crc kubenswrapper[4860]: I1014 15:18:39.072340 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d628649b-8b7c-44e8-b047-f00611d715d3" path="/var/lib/kubelet/pods/d628649b-8b7c-44e8-b047-f00611d715d3/volumes" Oct 14 15:18:39 crc kubenswrapper[4860]: I1014 15:18:39.076903 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85cdd63-2581-4545-8740-029dbf61a67a" path="/var/lib/kubelet/pods/f85cdd63-2581-4545-8740-029dbf61a67a/volumes" Oct 14 15:18:47 crc kubenswrapper[4860]: I1014 15:18:47.061861 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:18:47 crc kubenswrapper[4860]: E1014 15:18:47.062675 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:18:48 crc kubenswrapper[4860]: I1014 15:18:48.036829 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ed1d-account-create-hgzrl"] Oct 14 15:18:48 crc kubenswrapper[4860]: I1014 15:18:48.044596 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-dedc-account-create-r4p7j"] Oct 14 15:18:48 crc kubenswrapper[4860]: I1014 15:18:48.052274 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ed1d-account-create-hgzrl"] Oct 14 15:18:48 crc kubenswrapper[4860]: I1014 15:18:48.059135 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-dedc-account-create-r4p7j"] Oct 14 15:18:49 crc kubenswrapper[4860]: I1014 15:18:49.073188 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b69c0c4-56e5-4100-a195-9d29ebee6719" path="/var/lib/kubelet/pods/1b69c0c4-56e5-4100-a195-9d29ebee6719/volumes" Oct 14 15:18:49 crc kubenswrapper[4860]: I1014 15:18:49.073759 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b586fecc-3089-441e-8efa-8f84641f472b" path="/var/lib/kubelet/pods/b586fecc-3089-441e-8efa-8f84641f472b/volumes" Oct 14 15:18:55 crc kubenswrapper[4860]: I1014 15:18:55.036954 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-846cm"] Oct 14 15:18:55 crc kubenswrapper[4860]: I1014 15:18:55.045023 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-846cm"] Oct 14 15:18:55 crc kubenswrapper[4860]: I1014 15:18:55.072189 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de56759-b727-495a-b9dd-3daa0cd45527" path="/var/lib/kubelet/pods/4de56759-b727-495a-b9dd-3daa0cd45527/volumes" Oct 14 15:18:56 crc kubenswrapper[4860]: I1014 15:18:56.032267 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-d7fsm"] Oct 14 15:18:56 crc kubenswrapper[4860]: I1014 15:18:56.044443 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-d7fsm"] Oct 14 15:18:56 crc kubenswrapper[4860]: I1014 15:18:56.054785 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-gpw62"] Oct 14 15:18:56 crc kubenswrapper[4860]: I1014 15:18:56.063500 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-gpw62"] Oct 14 15:18:57 crc kubenswrapper[4860]: I1014 15:18:57.077228 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="532385fd-6404-44f6-93fa-0bfcf9b16662" path="/var/lib/kubelet/pods/532385fd-6404-44f6-93fa-0bfcf9b16662/volumes" Oct 14 15:18:57 crc kubenswrapper[4860]: I1014 15:18:57.077822 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="699edce6-a0b8-48e8-b5cb-b27747a6c048" path="/var/lib/kubelet/pods/699edce6-a0b8-48e8-b5cb-b27747a6c048/volumes" Oct 14 15:19:01 crc kubenswrapper[4860]: I1014 15:19:01.028880 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-54f4-account-create-5h8k4"] Oct 14 15:19:01 crc kubenswrapper[4860]: I1014 15:19:01.040230 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-54f4-account-create-5h8k4"] Oct 14 15:19:01 crc kubenswrapper[4860]: I1014 15:19:01.062342 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:19:01 crc kubenswrapper[4860]: I1014 15:19:01.079738 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b" path="/var/lib/kubelet/pods/2e2a8218-fd4f-44d9-b7bc-ae5fead34e2b/volumes" Oct 14 15:19:02 crc kubenswrapper[4860]: I1014 15:19:02.155455 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"88d96a64a7082d356ceb1b7aa3d1e1d3f5289d2d18f169a9ea68443f4df8c882"} Oct 14 15:19:15 crc kubenswrapper[4860]: I1014 15:19:15.531250 4860 scope.go:117] "RemoveContainer" containerID="ec7c31e794c33e06b23a2dd3b32b566620a41a1d670d4af29bee891190af6477" Oct 14 15:19:15 crc kubenswrapper[4860]: I1014 15:19:15.579835 4860 scope.go:117] "RemoveContainer" containerID="aaa6299e8b48a1795d7f5674299f6283d9d2d00de1682dd93e75ee818a2af147" Oct 14 15:19:15 crc kubenswrapper[4860]: I1014 15:19:15.616833 4860 scope.go:117] "RemoveContainer" containerID="b7723773d1697440c4fb91152ff65041ed188e23139a42462d3f3678a29475e5" Oct 14 15:19:15 crc kubenswrapper[4860]: I1014 15:19:15.689259 4860 scope.go:117] 
"RemoveContainer" containerID="efc97acb9d2b686089130b6db463734768abc1f07bea60efc08e0a4881dce350" Oct 14 15:19:15 crc kubenswrapper[4860]: I1014 15:19:15.740005 4860 scope.go:117] "RemoveContainer" containerID="ff96fb9146648fe633755c2438c955e5f50d7c147528254d5bb41cb10da9bb91" Oct 14 15:19:15 crc kubenswrapper[4860]: I1014 15:19:15.784177 4860 scope.go:117] "RemoveContainer" containerID="93e481fe258c101046812b948a7a17acc1b8c1ff0071c5e5dcc93a1b3a0b8bed" Oct 14 15:19:15 crc kubenswrapper[4860]: I1014 15:19:15.830052 4860 scope.go:117] "RemoveContainer" containerID="d07e1fa94614470d9fd6989a96974096b0feb61cebf557d163110cfc59c308b0" Oct 14 15:19:15 crc kubenswrapper[4860]: I1014 15:19:15.856834 4860 scope.go:117] "RemoveContainer" containerID="2f8fc8fc070e7110d5d86fb7183d1aeeea38eff08813acbb6dfdce163df98caf" Oct 14 15:19:15 crc kubenswrapper[4860]: I1014 15:19:15.877983 4860 scope.go:117] "RemoveContainer" containerID="042f7ae01bbb24257b17a0d932c3fa4e2ec3a7f0b0793173f2a32e3eed83a2bc" Oct 14 15:19:17 crc kubenswrapper[4860]: I1014 15:19:17.032000 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6793-account-create-h8c74"] Oct 14 15:19:17 crc kubenswrapper[4860]: I1014 15:19:17.044608 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9386-account-create-bbktn"] Oct 14 15:19:17 crc kubenswrapper[4860]: I1014 15:19:17.053785 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6793-account-create-h8c74"] Oct 14 15:19:17 crc kubenswrapper[4860]: I1014 15:19:17.061098 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-95d5-account-create-26t2x"] Oct 14 15:19:17 crc kubenswrapper[4860]: I1014 15:19:17.073369 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51429dab-2e49-4ee4-8fcf-4ecd0070b5a5" path="/var/lib/kubelet/pods/51429dab-2e49-4ee4-8fcf-4ecd0070b5a5/volumes" Oct 14 15:19:17 crc kubenswrapper[4860]: I1014 15:19:17.074048 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-9386-account-create-bbktn"] Oct 14 15:19:17 crc kubenswrapper[4860]: I1014 15:19:17.077508 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-95d5-account-create-26t2x"] Oct 14 15:19:19 crc kubenswrapper[4860]: I1014 15:19:19.080181 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2552725f-0e4f-4766-b564-6c01b225d5d5" path="/var/lib/kubelet/pods/2552725f-0e4f-4766-b564-6c01b225d5d5/volumes" Oct 14 15:19:19 crc kubenswrapper[4860]: I1014 15:19:19.081673 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48c09b2e-db9d-4989-9f7a-a0f94458d4d5" path="/var/lib/kubelet/pods/48c09b2e-db9d-4989-9f7a-a0f94458d4d5/volumes" Oct 14 15:19:25 crc kubenswrapper[4860]: I1014 15:19:25.044858 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-7dg8n"] Oct 14 15:19:25 crc kubenswrapper[4860]: I1014 15:19:25.053098 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-7dg8n"] Oct 14 15:19:25 crc kubenswrapper[4860]: I1014 15:19:25.075453 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca190f5b-bd3e-4628-a10d-6b8de6e826d8" path="/var/lib/kubelet/pods/ca190f5b-bd3e-4628-a10d-6b8de6e826d8/volumes" Oct 14 15:19:26 crc kubenswrapper[4860]: I1014 15:19:26.033952 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-b8tzr"] Oct 14 15:19:26 crc kubenswrapper[4860]: I1014 15:19:26.041563 4860 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-b8tzr"] Oct 14 15:19:27 crc kubenswrapper[4860]: I1014 15:19:27.075053 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06b62797-8a97-4db0-a6ca-e7b2172ddb78" path="/var/lib/kubelet/pods/06b62797-8a97-4db0-a6ca-e7b2172ddb78/volumes" Oct 14 15:19:41 crc kubenswrapper[4860]: I1014 15:19:41.492906 4860 generic.go:334] "Generic (PLEG): container finished" podID="18e270ba-e48c-4f9e-bc6a-8269b31f5698" containerID="88906c26996fbfd83fb63b71bfd5f80bd119c19fbd259d8fe62d44418a663d3a" exitCode=0 Oct 14 15:19:41 crc kubenswrapper[4860]: I1014 15:19:41.493010 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" event={"ID":"18e270ba-e48c-4f9e-bc6a-8269b31f5698","Type":"ContainerDied","Data":"88906c26996fbfd83fb63b71bfd5f80bd119c19fbd259d8fe62d44418a663d3a"} Oct 14 15:19:42 crc kubenswrapper[4860]: I1014 15:19:42.879708 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" Oct 14 15:19:42 crc kubenswrapper[4860]: I1014 15:19:42.896665 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-bootstrap-combined-ca-bundle\") pod \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\" (UID: \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\") " Oct 14 15:19:42 crc kubenswrapper[4860]: I1014 15:19:42.896956 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmmj6\" (UniqueName: \"kubernetes.io/projected/18e270ba-e48c-4f9e-bc6a-8269b31f5698-kube-api-access-cmmj6\") pod \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\" (UID: \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\") " Oct 14 15:19:42 crc kubenswrapper[4860]: I1014 15:19:42.897196 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-inventory\") pod \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\" (UID: \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\") " Oct 14 15:19:42 crc kubenswrapper[4860]: I1014 15:19:42.897424 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-ssh-key\") pod \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\" (UID: \"18e270ba-e48c-4f9e-bc6a-8269b31f5698\") " Oct 14 15:19:42 crc kubenswrapper[4860]: I1014 15:19:42.907758 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "18e270ba-e48c-4f9e-bc6a-8269b31f5698" (UID: "18e270ba-e48c-4f9e-bc6a-8269b31f5698"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:19:42 crc kubenswrapper[4860]: I1014 15:19:42.907821 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18e270ba-e48c-4f9e-bc6a-8269b31f5698-kube-api-access-cmmj6" (OuterVolumeSpecName: "kube-api-access-cmmj6") pod "18e270ba-e48c-4f9e-bc6a-8269b31f5698" (UID: "18e270ba-e48c-4f9e-bc6a-8269b31f5698"). InnerVolumeSpecName "kube-api-access-cmmj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:19:42 crc kubenswrapper[4860]: I1014 15:19:42.932692 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "18e270ba-e48c-4f9e-bc6a-8269b31f5698" (UID: "18e270ba-e48c-4f9e-bc6a-8269b31f5698"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:19:42 crc kubenswrapper[4860]: I1014 15:19:42.932718 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-inventory" (OuterVolumeSpecName: "inventory") pod "18e270ba-e48c-4f9e-bc6a-8269b31f5698" (UID: "18e270ba-e48c-4f9e-bc6a-8269b31f5698"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:19:42 crc kubenswrapper[4860]: I1014 15:19:42.999504 4860 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:19:42 crc kubenswrapper[4860]: I1014 15:19:42.999685 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmmj6\" (UniqueName: \"kubernetes.io/projected/18e270ba-e48c-4f9e-bc6a-8269b31f5698-kube-api-access-cmmj6\") on node \"crc\" DevicePath \"\"" Oct 14 15:19:42 crc kubenswrapper[4860]: I1014 15:19:42.999743 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 15:19:42 crc kubenswrapper[4860]: I1014 15:19:42.999818 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18e270ba-e48c-4f9e-bc6a-8269b31f5698-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.510180 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" event={"ID":"18e270ba-e48c-4f9e-bc6a-8269b31f5698","Type":"ContainerDied","Data":"22be694512b686ee5d3ea4aa16c32613a39be30ba02e81478c4da24513815026"} Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.510507 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22be694512b686ee5d3ea4aa16c32613a39be30ba02e81478c4da24513815026" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.510261 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.600643 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq"] Oct 14 15:19:43 crc kubenswrapper[4860]: E1014 15:19:43.601428 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e270ba-e48c-4f9e-bc6a-8269b31f5698" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.601522 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e270ba-e48c-4f9e-bc6a-8269b31f5698" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.601855 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="18e270ba-e48c-4f9e-bc6a-8269b31f5698" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.602697 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.604886 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.608939 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.608973 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.610507 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq5vx\" (UniqueName: \"kubernetes.io/projected/3b6f14ce-02b7-4b0c-91f7-de180b724b23-kube-api-access-xq5vx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq\" (UID: \"3b6f14ce-02b7-4b0c-91f7-de180b724b23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.610567 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b6f14ce-02b7-4b0c-91f7-de180b724b23-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq\" (UID: \"3b6f14ce-02b7-4b0c-91f7-de180b724b23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.610660 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3b6f14ce-02b7-4b0c-91f7-de180b724b23-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq\" (UID: \"3b6f14ce-02b7-4b0c-91f7-de180b724b23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.611178 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.616745 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq"] Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.711539 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq5vx\" (UniqueName: \"kubernetes.io/projected/3b6f14ce-02b7-4b0c-91f7-de180b724b23-kube-api-access-xq5vx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq\" (UID: \"3b6f14ce-02b7-4b0c-91f7-de180b724b23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.711606 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b6f14ce-02b7-4b0c-91f7-de180b724b23-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq\" (UID: \"3b6f14ce-02b7-4b0c-91f7-de180b724b23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.711707 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3b6f14ce-02b7-4b0c-91f7-de180b724b23-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq\" (UID: \"3b6f14ce-02b7-4b0c-91f7-de180b724b23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.716968 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b6f14ce-02b7-4b0c-91f7-de180b724b23-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq\" (UID: \"3b6f14ce-02b7-4b0c-91f7-de180b724b23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.717885 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3b6f14ce-02b7-4b0c-91f7-de180b724b23-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq\" (UID: \"3b6f14ce-02b7-4b0c-91f7-de180b724b23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.729862 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq5vx\" (UniqueName: \"kubernetes.io/projected/3b6f14ce-02b7-4b0c-91f7-de180b724b23-kube-api-access-xq5vx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq\" (UID: \"3b6f14ce-02b7-4b0c-91f7-de180b724b23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" Oct 14 15:19:43 crc kubenswrapper[4860]: I1014 15:19:43.923256 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" Oct 14 15:19:44 crc kubenswrapper[4860]: I1014 15:19:44.460429 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq"] Oct 14 15:19:44 crc kubenswrapper[4860]: I1014 15:19:44.467090 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 15:19:44 crc kubenswrapper[4860]: I1014 15:19:44.519071 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" event={"ID":"3b6f14ce-02b7-4b0c-91f7-de180b724b23","Type":"ContainerStarted","Data":"55fead694be7af7555318e350fd801ef3d4c22badde9054eabe9a200bdbba163"} Oct 14 15:19:45 crc kubenswrapper[4860]: I1014 15:19:45.529337 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" event={"ID":"3b6f14ce-02b7-4b0c-91f7-de180b724b23","Type":"ContainerStarted","Data":"2dcbbf912e4958130db419c8198239e22666d6dca0246445e948aa4364b29ee7"} Oct 14 15:19:45 crc kubenswrapper[4860]: I1014 15:19:45.555088 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" podStartSLOduration=2.015887074 podStartE2EDuration="2.555068314s" podCreationTimestamp="2025-10-14 15:19:43 +0000 UTC" firstStartedPulling="2025-10-14 15:19:44.466874476 +0000 UTC m=+1846.053657925" lastFinishedPulling="2025-10-14 15:19:45.006055716 +0000 UTC m=+1846.592839165" observedRunningTime="2025-10-14 15:19:45.545186474 +0000 UTC m=+1847.131969923" watchObservedRunningTime="2025-10-14 15:19:45.555068314 +0000 UTC m=+1847.141851763" Oct 14 15:20:14 crc kubenswrapper[4860]: I1014 15:20:14.048065 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-dhd74"] Oct 14 15:20:14 crc kubenswrapper[4860]: I1014 15:20:14.062714 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-dhd74"] Oct 14 15:20:15 crc kubenswrapper[4860]: I1014 15:20:15.072100 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c63dca02-9db5-41e7-90a0-0c19bd729242" path="/var/lib/kubelet/pods/c63dca02-9db5-41e7-90a0-0c19bd729242/volumes" Oct 14 15:20:16 crc kubenswrapper[4860]: I1014 15:20:16.138534 4860 scope.go:117] "RemoveContainer" containerID="954bc4d1818bf622ee8a06144a3b48f2323a934fd95d9db7376cc47b6cd2988a" Oct 14 15:20:16 crc kubenswrapper[4860]: I1014 15:20:16.207301 4860 scope.go:117] "RemoveContainer" containerID="0827df9788072d480b33a545b29a52af3504f7985ba9088314b186c414845895" Oct 14 15:20:16 crc kubenswrapper[4860]: I1014 15:20:16.229692 4860 scope.go:117] "RemoveContainer" containerID="9ff7523dc98d437b2317e74a7a32f9172982d2d775eaef0c4ca7e9b6be523110" Oct 14 15:20:16 crc kubenswrapper[4860]: I1014 15:20:16.283253 4860 scope.go:117] "RemoveContainer" containerID="0898e642f989278d78317aeabfc64b13728aaf1b34251fdc3e2493d641c4b355" Oct 14 15:20:16 crc kubenswrapper[4860]: I1014 15:20:16.340314 4860 scope.go:117] "RemoveContainer" containerID="a29ffdf3ee315aaef1e9be6f2e0e4822c67866303fb1387875c62b37ac9a5a32" Oct 14 15:20:16 crc kubenswrapper[4860]: I1014 15:20:16.376579 4860 scope.go:117] "RemoveContainer" containerID="23b40a675aee4a461bb601653b5aa9d804f82ad109ed6ee85be1f640011fc8ff" Oct 14 15:20:33 crc kubenswrapper[4860]: I1014 15:20:33.057838 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-db-sync-dbsq9"] Oct 14 15:20:33 crc kubenswrapper[4860]: I1014 15:20:33.077704 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wkzht"] Oct 14 15:20:33 crc kubenswrapper[4860]: I1014 15:20:33.093207 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wkzht"] Oct 14 15:20:33 crc kubenswrapper[4860]: I1014 15:20:33.106718 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-dbsq9"] Oct 14 15:20:35 crc kubenswrapper[4860]: I1014 15:20:35.071603 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3324c4e1-abc6-473d-8d14-28d41a4e27a8" path="/var/lib/kubelet/pods/3324c4e1-abc6-473d-8d14-28d41a4e27a8/volumes" Oct 14 15:20:35 crc kubenswrapper[4860]: I1014 15:20:35.072516 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8616715a-5ecc-4bec-8e55-14626927cce5" path="/var/lib/kubelet/pods/8616715a-5ecc-4bec-8e55-14626927cce5/volumes" Oct 14 15:20:55 crc kubenswrapper[4860]: I1014 15:20:55.044481 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-x2247"] Oct 14 15:20:55 crc kubenswrapper[4860]: I1014 15:20:55.051497 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-x2247"] Oct 14 15:20:55 crc kubenswrapper[4860]: I1014 15:20:55.075315 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0a3bc02-1357-4751-9496-a41526515867" path="/var/lib/kubelet/pods/f0a3bc02-1357-4751-9496-a41526515867/volumes" Oct 14 15:20:57 crc kubenswrapper[4860]: I1014 15:20:57.033879 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-grpb9"] Oct 14 15:20:57 crc kubenswrapper[4860]: I1014 15:20:57.042718 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-grpb9"] Oct 14 15:20:57 crc kubenswrapper[4860]: I1014 15:20:57.074359 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca080412-b618-4293-a06d-e0d9a774d36b" path="/var/lib/kubelet/pods/ca080412-b618-4293-a06d-e0d9a774d36b/volumes" Oct 14 15:21:16 crc kubenswrapper[4860]: I1014 15:21:16.572313 4860 scope.go:117] "RemoveContainer" containerID="6469106c14d5090a665c2bbd390f714e5f630ed44a6b1e2a12bb59c850325ed6" Oct 14 15:21:16 crc kubenswrapper[4860]: I1014 15:21:16.616986 4860 scope.go:117] "RemoveContainer" containerID="e8f98dd80c026cf6fee32e32aa25db1f319ad8c9ece42a632eccf1b99c7e00c5" Oct 14 15:21:16 crc kubenswrapper[4860]: I1014 15:21:16.665557 4860 scope.go:117] "RemoveContainer" containerID="5b81caa0fbe5a103584a9a6706463d60e8f0c80f69e219815ce3c43e0ccf8981" Oct 14 15:21:16 crc kubenswrapper[4860]: I1014 15:21:16.708053 4860 scope.go:117] "RemoveContainer" containerID="eeeea6721fbd5ee86c2edf8027ffeb6a868461bddeb38914b49777fdd6c5c4dd" Oct 14 15:21:29 crc kubenswrapper[4860]: I1014 15:21:29.245644 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:21:29 crc kubenswrapper[4860]: I1014 15:21:29.246196 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:21:37 crc kubenswrapper[4860]: I1014 15:21:37.496744 4860 generic.go:334] "Generic (PLEG): container finished" podID="3b6f14ce-02b7-4b0c-91f7-de180b724b23" containerID="2dcbbf912e4958130db419c8198239e22666d6dca0246445e948aa4364b29ee7" exitCode=0 Oct 14 15:21:37 crc kubenswrapper[4860]: I1014 15:21:37.496846 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" event={"ID":"3b6f14ce-02b7-4b0c-91f7-de180b724b23","Type":"ContainerDied","Data":"2dcbbf912e4958130db419c8198239e22666d6dca0246445e948aa4364b29ee7"} Oct 14 15:21:38 crc kubenswrapper[4860]: I1014 15:21:38.947463 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" Oct 14 15:21:38 crc kubenswrapper[4860]: I1014 15:21:38.971866 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq5vx\" (UniqueName: \"kubernetes.io/projected/3b6f14ce-02b7-4b0c-91f7-de180b724b23-kube-api-access-xq5vx\") pod \"3b6f14ce-02b7-4b0c-91f7-de180b724b23\" (UID: \"3b6f14ce-02b7-4b0c-91f7-de180b724b23\") " Oct 14 15:21:38 crc kubenswrapper[4860]: I1014 15:21:38.972197 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3b6f14ce-02b7-4b0c-91f7-de180b724b23-ssh-key\") pod \"3b6f14ce-02b7-4b0c-91f7-de180b724b23\" (UID: \"3b6f14ce-02b7-4b0c-91f7-de180b724b23\") " Oct 14 15:21:38 crc kubenswrapper[4860]: I1014 15:21:38.972307 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b6f14ce-02b7-4b0c-91f7-de180b724b23-inventory\") pod \"3b6f14ce-02b7-4b0c-91f7-de180b724b23\" (UID: \"3b6f14ce-02b7-4b0c-91f7-de180b724b23\") " Oct 14 15:21:38 crc kubenswrapper[4860]: I1014 15:21:38.979860 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b6f14ce-02b7-4b0c-91f7-de180b724b23-kube-api-access-xq5vx" (OuterVolumeSpecName: "kube-api-access-xq5vx") pod "3b6f14ce-02b7-4b0c-91f7-de180b724b23" (UID: "3b6f14ce-02b7-4b0c-91f7-de180b724b23"). InnerVolumeSpecName "kube-api-access-xq5vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.017369 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b6f14ce-02b7-4b0c-91f7-de180b724b23-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3b6f14ce-02b7-4b0c-91f7-de180b724b23" (UID: "3b6f14ce-02b7-4b0c-91f7-de180b724b23"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.020798 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b6f14ce-02b7-4b0c-91f7-de180b724b23-inventory" (OuterVolumeSpecName: "inventory") pod "3b6f14ce-02b7-4b0c-91f7-de180b724b23" (UID: "3b6f14ce-02b7-4b0c-91f7-de180b724b23"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.074410 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq5vx\" (UniqueName: \"kubernetes.io/projected/3b6f14ce-02b7-4b0c-91f7-de180b724b23-kube-api-access-xq5vx\") on node \"crc\" DevicePath \"\"" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.075149 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3b6f14ce-02b7-4b0c-91f7-de180b724b23-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.075161 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b6f14ce-02b7-4b0c-91f7-de180b724b23-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.515964 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" event={"ID":"3b6f14ce-02b7-4b0c-91f7-de180b724b23","Type":"ContainerDied","Data":"55fead694be7af7555318e350fd801ef3d4c22badde9054eabe9a200bdbba163"} Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.516014 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55fead694be7af7555318e350fd801ef3d4c22badde9054eabe9a200bdbba163" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.516080 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.622375 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq"] Oct 14 15:21:39 crc kubenswrapper[4860]: E1014 15:21:39.622852 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b6f14ce-02b7-4b0c-91f7-de180b724b23" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.622873 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b6f14ce-02b7-4b0c-91f7-de180b724b23" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.623160 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b6f14ce-02b7-4b0c-91f7-de180b724b23" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.623963 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.639207 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.639494 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.639684 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.639958 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.647094 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq"] Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.691845 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72789ed5-d4cd-4245-ad23-5114f65ab462-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq\" (UID: \"72789ed5-d4cd-4245-ad23-5114f65ab462\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.691980 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q99mf\" (UniqueName: \"kubernetes.io/projected/72789ed5-d4cd-4245-ad23-5114f65ab462-kube-api-access-q99mf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq\" (UID: \"72789ed5-d4cd-4245-ad23-5114f65ab462\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.692122 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72789ed5-d4cd-4245-ad23-5114f65ab462-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq\" (UID: \"72789ed5-d4cd-4245-ad23-5114f65ab462\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.794163 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72789ed5-d4cd-4245-ad23-5114f65ab462-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq\" (UID: \"72789ed5-d4cd-4245-ad23-5114f65ab462\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.794216 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q99mf\" (UniqueName: \"kubernetes.io/projected/72789ed5-d4cd-4245-ad23-5114f65ab462-kube-api-access-q99mf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq\" (UID: \"72789ed5-d4cd-4245-ad23-5114f65ab462\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.794259 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72789ed5-d4cd-4245-ad23-5114f65ab462-ssh-key\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq\" (UID: \"72789ed5-d4cd-4245-ad23-5114f65ab462\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.798843 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72789ed5-d4cd-4245-ad23-5114f65ab462-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq\" (UID: \"72789ed5-d4cd-4245-ad23-5114f65ab462\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.799236 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72789ed5-d4cd-4245-ad23-5114f65ab462-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq\" (UID: \"72789ed5-d4cd-4245-ad23-5114f65ab462\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.812530 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q99mf\" (UniqueName: \"kubernetes.io/projected/72789ed5-d4cd-4245-ad23-5114f65ab462-kube-api-access-q99mf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq\" (UID: \"72789ed5-d4cd-4245-ad23-5114f65ab462\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" Oct 14 15:21:39 crc kubenswrapper[4860]: I1014 15:21:39.943499 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" Oct 14 15:21:40 crc kubenswrapper[4860]: I1014 15:21:40.547647 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq"] Oct 14 15:21:41 crc kubenswrapper[4860]: I1014 15:21:41.532956 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" event={"ID":"72789ed5-d4cd-4245-ad23-5114f65ab462","Type":"ContainerStarted","Data":"99fdd7be25c9025f272316cef8ec123ca514ba482538fdb98b7f51094696a79e"} Oct 14 15:21:41 crc kubenswrapper[4860]: I1014 15:21:41.533308 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" event={"ID":"72789ed5-d4cd-4245-ad23-5114f65ab462","Type":"ContainerStarted","Data":"3979211fee14b0c73831544f2ee0c3bb2cef9448676da09f54e7874a36eb3b5d"} Oct 14 15:21:41 crc kubenswrapper[4860]: I1014 15:21:41.552647 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" podStartSLOduration=2.371288539 podStartE2EDuration="2.552626917s" podCreationTimestamp="2025-10-14 15:21:39 +0000 UTC" firstStartedPulling="2025-10-14 15:21:40.563014761 +0000 UTC m=+1962.149798210" lastFinishedPulling="2025-10-14 15:21:40.744353139 +0000 UTC m=+1962.331136588" observedRunningTime="2025-10-14 15:21:41.54741305 +0000 UTC m=+1963.134196499" watchObservedRunningTime="2025-10-14 15:21:41.552626917 +0000 UTC m=+1963.139410366" Oct 14 15:21:54 crc kubenswrapper[4860]: I1014 15:21:54.040017 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-rblbz"] Oct 14 15:21:54 crc kubenswrapper[4860]: I1014 15:21:54.047234 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-db-create-kfbxn"] Oct 14 15:21:54 crc kubenswrapper[4860]: I1014 15:21:54.057086 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-kfbxn"] Oct 14 15:21:54 crc kubenswrapper[4860]: I1014 15:21:54.066739 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-ttngb"] Oct 14 15:21:54 crc kubenswrapper[4860]: I1014 15:21:54.074530 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-rblbz"] Oct 14 15:21:54 crc kubenswrapper[4860]: I1014 15:21:54.082492 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-ttngb"] Oct 14 15:21:55 crc kubenswrapper[4860]: I1014 15:21:55.072076 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43071b38-c290-49b5-ade1-9bd9c623062b" path="/var/lib/kubelet/pods/43071b38-c290-49b5-ade1-9bd9c623062b/volumes" Oct 14 15:21:55 crc kubenswrapper[4860]: I1014 15:21:55.073054 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae5235d3-2655-428f-bad1-a71c041b1254" path="/var/lib/kubelet/pods/ae5235d3-2655-428f-bad1-a71c041b1254/volumes" Oct 14 15:21:55 crc kubenswrapper[4860]: I1014 15:21:55.073554 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce4462d4-a6de-4580-bf47-96a2848f3aba" path="/var/lib/kubelet/pods/ce4462d4-a6de-4580-bf47-96a2848f3aba/volumes" Oct 14 15:21:59 crc kubenswrapper[4860]: I1014 15:21:59.245638 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:21:59 crc kubenswrapper[4860]: I1014 15:21:59.246964 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:22:04 crc kubenswrapper[4860]: I1014 15:22:04.030755 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-dedf-account-create-q9kv8"] Oct 14 15:22:04 crc kubenswrapper[4860]: I1014 15:22:04.039993 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-d11f-account-create-9mhcb"] Oct 14 15:22:04 crc kubenswrapper[4860]: I1014 15:22:04.050460 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-d11f-account-create-9mhcb"] Oct 14 15:22:04 crc kubenswrapper[4860]: I1014 15:22:04.059761 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-dedf-account-create-q9kv8"] Oct 14 15:22:04 crc kubenswrapper[4860]: I1014 15:22:04.076526 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-fa20-account-create-k4vdx"] Oct 14 15:22:04 crc kubenswrapper[4860]: I1014 15:22:04.086008 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-fa20-account-create-k4vdx"] Oct 14 15:22:05 crc kubenswrapper[4860]: I1014 15:22:05.072003 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b949f2b-c0a2-4371-b9ee-8eea850586b1" path="/var/lib/kubelet/pods/5b949f2b-c0a2-4371-b9ee-8eea850586b1/volumes" Oct 14 15:22:05 crc kubenswrapper[4860]: I1014 15:22:05.072642 4860 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfcfb6eb-044a-4f21-b60b-333306949a88" path="/var/lib/kubelet/pods/dfcfb6eb-044a-4f21-b60b-333306949a88/volumes" Oct 14 15:22:05 crc kubenswrapper[4860]: I1014 15:22:05.073231 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea90be69-850d-4707-8931-d91bed695f91" path="/var/lib/kubelet/pods/ea90be69-850d-4707-8931-d91bed695f91/volumes" Oct 14 15:22:16 crc kubenswrapper[4860]: I1014 15:22:16.837547 4860 scope.go:117] "RemoveContainer" containerID="9041798651ff223962c93c6e70dfb10a5364fba0d0c7248816b7f64a044507fb" Oct 14 15:22:16 crc kubenswrapper[4860]: I1014 15:22:16.896660 4860 scope.go:117] "RemoveContainer" containerID="898235bf8c266bffcb5a89422446f3e7b1654c22c51d41d6e14c4aea790eabfc" Oct 14 15:22:16 crc kubenswrapper[4860]: I1014 15:22:16.932609 4860 scope.go:117] "RemoveContainer" containerID="99420f35be2ab1d26d3d621ada87741153172e0eb6dab6e61286e16d74984f57" Oct 14 15:22:16 crc kubenswrapper[4860]: I1014 15:22:16.968965 4860 scope.go:117] "RemoveContainer" containerID="9a0a214ff333160d39f05c10155786f3ebbf76ef17160ce3067a11df6b4c16be" Oct 14 15:22:17 crc kubenswrapper[4860]: I1014 15:22:17.016166 4860 scope.go:117] "RemoveContainer" containerID="ec320ee991fee7e30ab4ea84e757e483e72571a3789f13b056330b85d8325b8a" Oct 14 15:22:17 crc kubenswrapper[4860]: I1014 15:22:17.063302 4860 scope.go:117] "RemoveContainer" containerID="4768e56088d10074a60b9e0400adb465346357b0c57d0bade63f6456b64becf2" Oct 14 15:22:29 crc kubenswrapper[4860]: I1014 15:22:29.245967 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:22:29 crc kubenswrapper[4860]: I1014 15:22:29.247838 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:22:29 crc kubenswrapper[4860]: I1014 15:22:29.247974 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 15:22:29 crc kubenswrapper[4860]: I1014 15:22:29.248924 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88d96a64a7082d356ceb1b7aa3d1e1d3f5289d2d18f169a9ea68443f4df8c882"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 15:22:29 crc kubenswrapper[4860]: I1014 15:22:29.249130 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://88d96a64a7082d356ceb1b7aa3d1e1d3f5289d2d18f169a9ea68443f4df8c882" gracePeriod=600 Oct 14 15:22:29 crc kubenswrapper[4860]: I1014 15:22:29.979724 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="88d96a64a7082d356ceb1b7aa3d1e1d3f5289d2d18f169a9ea68443f4df8c882" exitCode=0 Oct 14 
15:22:29 crc kubenswrapper[4860]: I1014 15:22:29.979864 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"88d96a64a7082d356ceb1b7aa3d1e1d3f5289d2d18f169a9ea68443f4df8c882"} Oct 14 15:22:29 crc kubenswrapper[4860]: I1014 15:22:29.980361 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484"} Oct 14 15:22:29 crc kubenswrapper[4860]: I1014 15:22:29.980425 4860 scope.go:117] "RemoveContainer" containerID="5b87c9a85c64fd4545c10cd39b83729e1ab6e03d6ca3d3494053e64804bbd642" Oct 14 15:22:30 crc kubenswrapper[4860]: I1014 15:22:30.040254 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dvtwm"] Oct 14 15:22:30 crc kubenswrapper[4860]: I1014 15:22:30.052924 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dvtwm"] Oct 14 15:22:31 crc kubenswrapper[4860]: I1014 15:22:31.072863 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a0a3f5b-875c-49b4-8649-ed231cbb71c0" path="/var/lib/kubelet/pods/3a0a3f5b-875c-49b4-8649-ed231cbb71c0/volumes" Oct 14 15:23:00 crc kubenswrapper[4860]: I1014 15:23:00.041332 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-9rxm7"] Oct 14 15:23:00 crc kubenswrapper[4860]: I1014 15:23:00.050256 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-9rxm7"] Oct 14 15:23:01 crc kubenswrapper[4860]: I1014 15:23:01.071276 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d17051e3-47fc-4f95-8442-3ff6327fadf7" path="/var/lib/kubelet/pods/d17051e3-47fc-4f95-8442-3ff6327fadf7/volumes" Oct 14 15:23:01 crc kubenswrapper[4860]: I1014 15:23:01.240857 4860 generic.go:334] "Generic (PLEG): container finished" podID="72789ed5-d4cd-4245-ad23-5114f65ab462" containerID="99fdd7be25c9025f272316cef8ec123ca514ba482538fdb98b7f51094696a79e" exitCode=0 Oct 14 15:23:01 crc kubenswrapper[4860]: I1014 15:23:01.240910 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" event={"ID":"72789ed5-d4cd-4245-ad23-5114f65ab462","Type":"ContainerDied","Data":"99fdd7be25c9025f272316cef8ec123ca514ba482538fdb98b7f51094696a79e"} Oct 14 15:23:02 crc kubenswrapper[4860]: I1014 15:23:02.649690 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" Oct 14 15:23:02 crc kubenswrapper[4860]: I1014 15:23:02.832974 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72789ed5-d4cd-4245-ad23-5114f65ab462-inventory\") pod \"72789ed5-d4cd-4245-ad23-5114f65ab462\" (UID: \"72789ed5-d4cd-4245-ad23-5114f65ab462\") " Oct 14 15:23:02 crc kubenswrapper[4860]: I1014 15:23:02.833210 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q99mf\" (UniqueName: \"kubernetes.io/projected/72789ed5-d4cd-4245-ad23-5114f65ab462-kube-api-access-q99mf\") pod \"72789ed5-d4cd-4245-ad23-5114f65ab462\" (UID: \"72789ed5-d4cd-4245-ad23-5114f65ab462\") " Oct 14 15:23:02 crc kubenswrapper[4860]: I1014 15:23:02.833274 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72789ed5-d4cd-4245-ad23-5114f65ab462-ssh-key\") pod \"72789ed5-d4cd-4245-ad23-5114f65ab462\" (UID: \"72789ed5-d4cd-4245-ad23-5114f65ab462\") " Oct 14 15:23:02 crc kubenswrapper[4860]: I1014 15:23:02.842298 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72789ed5-d4cd-4245-ad23-5114f65ab462-kube-api-access-q99mf" (OuterVolumeSpecName: "kube-api-access-q99mf") pod "72789ed5-d4cd-4245-ad23-5114f65ab462" (UID: "72789ed5-d4cd-4245-ad23-5114f65ab462"). InnerVolumeSpecName "kube-api-access-q99mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:23:02 crc kubenswrapper[4860]: I1014 15:23:02.862258 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72789ed5-d4cd-4245-ad23-5114f65ab462-inventory" (OuterVolumeSpecName: "inventory") pod "72789ed5-d4cd-4245-ad23-5114f65ab462" (UID: "72789ed5-d4cd-4245-ad23-5114f65ab462"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:23:02 crc kubenswrapper[4860]: I1014 15:23:02.865285 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72789ed5-d4cd-4245-ad23-5114f65ab462-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "72789ed5-d4cd-4245-ad23-5114f65ab462" (UID: "72789ed5-d4cd-4245-ad23-5114f65ab462"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:23:02 crc kubenswrapper[4860]: I1014 15:23:02.935718 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q99mf\" (UniqueName: \"kubernetes.io/projected/72789ed5-d4cd-4245-ad23-5114f65ab462-kube-api-access-q99mf\") on node \"crc\" DevicePath \"\"" Oct 14 15:23:02 crc kubenswrapper[4860]: I1014 15:23:02.935785 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72789ed5-d4cd-4245-ad23-5114f65ab462-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:23:02 crc kubenswrapper[4860]: I1014 15:23:02.935800 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72789ed5-d4cd-4245-ad23-5114f65ab462-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.261459 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" event={"ID":"72789ed5-d4cd-4245-ad23-5114f65ab462","Type":"ContainerDied","Data":"3979211fee14b0c73831544f2ee0c3bb2cef9448676da09f54e7874a36eb3b5d"} Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.261498 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.261510 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3979211fee14b0c73831544f2ee0c3bb2cef9448676da09f54e7874a36eb3b5d" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.357005 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb"] Oct 14 15:23:03 crc kubenswrapper[4860]: E1014 15:23:03.357436 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72789ed5-d4cd-4245-ad23-5114f65ab462" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.357459 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="72789ed5-d4cd-4245-ad23-5114f65ab462" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.357675 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="72789ed5-d4cd-4245-ad23-5114f65ab462" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.358437 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.368975 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb"] Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.372234 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.372296 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.372300 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.372234 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.445609 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/487e54e1-aee7-4e2c-abdd-903ea61b0b11-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-22rlb\" (UID: \"487e54e1-aee7-4e2c-abdd-903ea61b0b11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.445664 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkdqj\" (UniqueName: \"kubernetes.io/projected/487e54e1-aee7-4e2c-abdd-903ea61b0b11-kube-api-access-fkdqj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-22rlb\" (UID: \"487e54e1-aee7-4e2c-abdd-903ea61b0b11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.445825 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/487e54e1-aee7-4e2c-abdd-903ea61b0b11-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-22rlb\" (UID: \"487e54e1-aee7-4e2c-abdd-903ea61b0b11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.547265 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/487e54e1-aee7-4e2c-abdd-903ea61b0b11-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-22rlb\" (UID: \"487e54e1-aee7-4e2c-abdd-903ea61b0b11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.547351 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/487e54e1-aee7-4e2c-abdd-903ea61b0b11-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-22rlb\" (UID: \"487e54e1-aee7-4e2c-abdd-903ea61b0b11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.547385 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkdqj\" (UniqueName: \"kubernetes.io/projected/487e54e1-aee7-4e2c-abdd-903ea61b0b11-kube-api-access-fkdqj\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-22rlb\" (UID: \"487e54e1-aee7-4e2c-abdd-903ea61b0b11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.550600 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/487e54e1-aee7-4e2c-abdd-903ea61b0b11-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-22rlb\" (UID: \"487e54e1-aee7-4e2c-abdd-903ea61b0b11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.559470 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/487e54e1-aee7-4e2c-abdd-903ea61b0b11-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-22rlb\" (UID: \"487e54e1-aee7-4e2c-abdd-903ea61b0b11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.569633 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkdqj\" (UniqueName: \"kubernetes.io/projected/487e54e1-aee7-4e2c-abdd-903ea61b0b11-kube-api-access-fkdqj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-22rlb\" (UID: \"487e54e1-aee7-4e2c-abdd-903ea61b0b11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" Oct 14 15:23:03 crc kubenswrapper[4860]: I1014 15:23:03.675198 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" Oct 14 15:23:04 crc kubenswrapper[4860]: I1014 15:23:04.034802 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-svrr2"] Oct 14 15:23:04 crc kubenswrapper[4860]: I1014 15:23:04.048227 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-svrr2"] Oct 14 15:23:04 crc kubenswrapper[4860]: I1014 15:23:04.225892 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb"] Oct 14 15:23:04 crc kubenswrapper[4860]: I1014 15:23:04.270772 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" event={"ID":"487e54e1-aee7-4e2c-abdd-903ea61b0b11","Type":"ContainerStarted","Data":"4c6a34a4b923e83c912735c3af91574678c188891d113b2f118a577b5373eb61"} Oct 14 15:23:05 crc kubenswrapper[4860]: I1014 15:23:05.073353 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c0c8cbd-2256-4261-9bf5-a62952d239b4" path="/var/lib/kubelet/pods/4c0c8cbd-2256-4261-9bf5-a62952d239b4/volumes" Oct 14 15:23:05 crc kubenswrapper[4860]: I1014 15:23:05.281705 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" event={"ID":"487e54e1-aee7-4e2c-abdd-903ea61b0b11","Type":"ContainerStarted","Data":"85672c86fd857464c12418bc1f50b1d6205d216aa730c7d9e6c4b9a71fca37b5"} Oct 14 15:23:05 crc kubenswrapper[4860]: I1014 15:23:05.303172 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" podStartSLOduration=1.760623676 podStartE2EDuration="2.303155384s" podCreationTimestamp="2025-10-14 15:23:03 +0000 UTC" firstStartedPulling="2025-10-14 
15:23:04.234731572 +0000 UTC m=+2045.821515021" lastFinishedPulling="2025-10-14 15:23:04.77726328 +0000 UTC m=+2046.364046729" observedRunningTime="2025-10-14 15:23:05.296442052 +0000 UTC m=+2046.883225501" watchObservedRunningTime="2025-10-14 15:23:05.303155384 +0000 UTC m=+2046.889938833" Oct 14 15:23:10 crc kubenswrapper[4860]: I1014 15:23:10.322640 4860 generic.go:334] "Generic (PLEG): container finished" podID="487e54e1-aee7-4e2c-abdd-903ea61b0b11" containerID="85672c86fd857464c12418bc1f50b1d6205d216aa730c7d9e6c4b9a71fca37b5" exitCode=0 Oct 14 15:23:10 crc kubenswrapper[4860]: I1014 15:23:10.323350 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" event={"ID":"487e54e1-aee7-4e2c-abdd-903ea61b0b11","Type":"ContainerDied","Data":"85672c86fd857464c12418bc1f50b1d6205d216aa730c7d9e6c4b9a71fca37b5"} Oct 14 15:23:11 crc kubenswrapper[4860]: I1014 15:23:11.721229 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" Oct 14 15:23:11 crc kubenswrapper[4860]: I1014 15:23:11.896097 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkdqj\" (UniqueName: \"kubernetes.io/projected/487e54e1-aee7-4e2c-abdd-903ea61b0b11-kube-api-access-fkdqj\") pod \"487e54e1-aee7-4e2c-abdd-903ea61b0b11\" (UID: \"487e54e1-aee7-4e2c-abdd-903ea61b0b11\") " Oct 14 15:23:11 crc kubenswrapper[4860]: I1014 15:23:11.896288 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/487e54e1-aee7-4e2c-abdd-903ea61b0b11-inventory\") pod \"487e54e1-aee7-4e2c-abdd-903ea61b0b11\" (UID: \"487e54e1-aee7-4e2c-abdd-903ea61b0b11\") " Oct 14 15:23:11 crc kubenswrapper[4860]: I1014 15:23:11.896331 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/487e54e1-aee7-4e2c-abdd-903ea61b0b11-ssh-key\") pod \"487e54e1-aee7-4e2c-abdd-903ea61b0b11\" (UID: \"487e54e1-aee7-4e2c-abdd-903ea61b0b11\") " Oct 14 15:23:11 crc kubenswrapper[4860]: I1014 15:23:11.901297 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/487e54e1-aee7-4e2c-abdd-903ea61b0b11-kube-api-access-fkdqj" (OuterVolumeSpecName: "kube-api-access-fkdqj") pod "487e54e1-aee7-4e2c-abdd-903ea61b0b11" (UID: "487e54e1-aee7-4e2c-abdd-903ea61b0b11"). InnerVolumeSpecName "kube-api-access-fkdqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:23:11 crc kubenswrapper[4860]: I1014 15:23:11.927079 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/487e54e1-aee7-4e2c-abdd-903ea61b0b11-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "487e54e1-aee7-4e2c-abdd-903ea61b0b11" (UID: "487e54e1-aee7-4e2c-abdd-903ea61b0b11"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:23:11 crc kubenswrapper[4860]: I1014 15:23:11.932401 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/487e54e1-aee7-4e2c-abdd-903ea61b0b11-inventory" (OuterVolumeSpecName: "inventory") pod "487e54e1-aee7-4e2c-abdd-903ea61b0b11" (UID: "487e54e1-aee7-4e2c-abdd-903ea61b0b11"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:23:11 crc kubenswrapper[4860]: I1014 15:23:11.998224 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkdqj\" (UniqueName: \"kubernetes.io/projected/487e54e1-aee7-4e2c-abdd-903ea61b0b11-kube-api-access-fkdqj\") on node \"crc\" DevicePath \"\"" Oct 14 15:23:11 crc kubenswrapper[4860]: I1014 15:23:11.998259 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/487e54e1-aee7-4e2c-abdd-903ea61b0b11-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 15:23:11 crc kubenswrapper[4860]: I1014 15:23:11.998271 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/487e54e1-aee7-4e2c-abdd-903ea61b0b11-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.341012 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" event={"ID":"487e54e1-aee7-4e2c-abdd-903ea61b0b11","Type":"ContainerDied","Data":"4c6a34a4b923e83c912735c3af91574678c188891d113b2f118a577b5373eb61"} Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.341426 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c6a34a4b923e83c912735c3af91574678c188891d113b2f118a577b5373eb61" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.341484 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-22rlb" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.448526 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms"] Oct 14 15:23:12 crc kubenswrapper[4860]: E1014 15:23:12.449246 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487e54e1-aee7-4e2c-abdd-903ea61b0b11" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.449341 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="487e54e1-aee7-4e2c-abdd-903ea61b0b11" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.449621 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="487e54e1-aee7-4e2c-abdd-903ea61b0b11" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.450592 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.454267 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.454680 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.454363 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.454399 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.461293 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms"] Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.513500 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e540b72-fca1-4c14-8830-8fa070543f8c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dq8ms\" (UID: \"1e540b72-fca1-4c14-8830-8fa070543f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.513687 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e540b72-fca1-4c14-8830-8fa070543f8c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dq8ms\" (UID: \"1e540b72-fca1-4c14-8830-8fa070543f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.513849 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k6jd\" (UniqueName: \"kubernetes.io/projected/1e540b72-fca1-4c14-8830-8fa070543f8c-kube-api-access-2k6jd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dq8ms\" (UID: \"1e540b72-fca1-4c14-8830-8fa070543f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.615934 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k6jd\" (UniqueName: \"kubernetes.io/projected/1e540b72-fca1-4c14-8830-8fa070543f8c-kube-api-access-2k6jd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dq8ms\" (UID: \"1e540b72-fca1-4c14-8830-8fa070543f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.616452 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e540b72-fca1-4c14-8830-8fa070543f8c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dq8ms\" (UID: \"1e540b72-fca1-4c14-8830-8fa070543f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.616603 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e540b72-fca1-4c14-8830-8fa070543f8c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dq8ms\" (UID: 
\"1e540b72-fca1-4c14-8830-8fa070543f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.620574 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e540b72-fca1-4c14-8830-8fa070543f8c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dq8ms\" (UID: \"1e540b72-fca1-4c14-8830-8fa070543f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.627551 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e540b72-fca1-4c14-8830-8fa070543f8c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dq8ms\" (UID: \"1e540b72-fca1-4c14-8830-8fa070543f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.637631 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k6jd\" (UniqueName: \"kubernetes.io/projected/1e540b72-fca1-4c14-8830-8fa070543f8c-kube-api-access-2k6jd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dq8ms\" (UID: \"1e540b72-fca1-4c14-8830-8fa070543f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" Oct 14 15:23:12 crc kubenswrapper[4860]: I1014 15:23:12.769550 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" Oct 14 15:23:13 crc kubenswrapper[4860]: I1014 15:23:13.339884 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms"] Oct 14 15:23:14 crc kubenswrapper[4860]: I1014 15:23:14.365480 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" event={"ID":"1e540b72-fca1-4c14-8830-8fa070543f8c","Type":"ContainerStarted","Data":"f1cf643add58b06ec232798aecfc818ce4c9fdf46a8a6cb0213510bca115355e"} Oct 14 15:23:14 crc kubenswrapper[4860]: I1014 15:23:14.365526 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" event={"ID":"1e540b72-fca1-4c14-8830-8fa070543f8c","Type":"ContainerStarted","Data":"7378014fa0e61a839eced990bf5521f3142a6fd2fd7de84d76eb7c33d3807e12"} Oct 14 15:23:14 crc kubenswrapper[4860]: I1014 15:23:14.395066 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" podStartSLOduration=2.257464374 podStartE2EDuration="2.395044763s" podCreationTimestamp="2025-10-14 15:23:12 +0000 UTC" firstStartedPulling="2025-10-14 15:23:13.353511864 +0000 UTC m=+2054.940295313" lastFinishedPulling="2025-10-14 15:23:13.491092253 +0000 UTC m=+2055.077875702" observedRunningTime="2025-10-14 15:23:14.387933901 +0000 UTC m=+2055.974717360" watchObservedRunningTime="2025-10-14 15:23:14.395044763 +0000 UTC m=+2055.981828212" Oct 14 15:23:17 crc kubenswrapper[4860]: I1014 15:23:17.200699 4860 scope.go:117] "RemoveContainer" containerID="6ad05a8e79f65e07a7c0435d2adc192c5d9aa1507b50627b41120ab66467bb0e" Oct 14 15:23:17 crc kubenswrapper[4860]: I1014 15:23:17.251459 4860 scope.go:117] "RemoveContainer" containerID="805c9a11570b79e1fefdae7d7c88096c58886655c8ef02bbaef2390a66ce2b30" Oct 14 15:23:17 crc kubenswrapper[4860]: I1014 15:23:17.329796 4860 scope.go:117] 
"RemoveContainer" containerID="8f4946dea223cabde57677246514f3688d78e686b5c2061b1d6b8dc08b54640e" Oct 14 15:23:44 crc kubenswrapper[4860]: I1014 15:23:44.042682 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-wdb8w"] Oct 14 15:23:44 crc kubenswrapper[4860]: I1014 15:23:44.050797 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-wdb8w"] Oct 14 15:23:45 crc kubenswrapper[4860]: I1014 15:23:45.074542 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26fd2566-3969-4d61-9bf0-9944df693a16" path="/var/lib/kubelet/pods/26fd2566-3969-4d61-9bf0-9944df693a16/volumes" Oct 14 15:23:47 crc kubenswrapper[4860]: I1014 15:23:47.791923 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sb7kt"] Oct 14 15:23:47 crc kubenswrapper[4860]: I1014 15:23:47.795480 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sb7kt" Oct 14 15:23:47 crc kubenswrapper[4860]: I1014 15:23:47.808069 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sb7kt"] Oct 14 15:23:47 crc kubenswrapper[4860]: I1014 15:23:47.991326 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17a750b8-880b-4623-8629-ca48c942292c-utilities\") pod \"redhat-operators-sb7kt\" (UID: \"17a750b8-880b-4623-8629-ca48c942292c\") " pod="openshift-marketplace/redhat-operators-sb7kt" Oct 14 15:23:47 crc kubenswrapper[4860]: I1014 15:23:47.991401 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gt99\" (UniqueName: \"kubernetes.io/projected/17a750b8-880b-4623-8629-ca48c942292c-kube-api-access-9gt99\") pod \"redhat-operators-sb7kt\" (UID: \"17a750b8-880b-4623-8629-ca48c942292c\") " pod="openshift-marketplace/redhat-operators-sb7kt" Oct 14 15:23:47 crc kubenswrapper[4860]: I1014 15:23:47.991465 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17a750b8-880b-4623-8629-ca48c942292c-catalog-content\") pod \"redhat-operators-sb7kt\" (UID: \"17a750b8-880b-4623-8629-ca48c942292c\") " pod="openshift-marketplace/redhat-operators-sb7kt" Oct 14 15:23:48 crc kubenswrapper[4860]: I1014 15:23:48.094267 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17a750b8-880b-4623-8629-ca48c942292c-utilities\") pod \"redhat-operators-sb7kt\" (UID: \"17a750b8-880b-4623-8629-ca48c942292c\") " pod="openshift-marketplace/redhat-operators-sb7kt" Oct 14 15:23:48 crc kubenswrapper[4860]: I1014 15:23:48.094360 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gt99\" (UniqueName: \"kubernetes.io/projected/17a750b8-880b-4623-8629-ca48c942292c-kube-api-access-9gt99\") pod \"redhat-operators-sb7kt\" (UID: \"17a750b8-880b-4623-8629-ca48c942292c\") " pod="openshift-marketplace/redhat-operators-sb7kt" Oct 14 15:23:48 crc kubenswrapper[4860]: I1014 15:23:48.094443 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17a750b8-880b-4623-8629-ca48c942292c-catalog-content\") pod \"redhat-operators-sb7kt\" (UID: \"17a750b8-880b-4623-8629-ca48c942292c\") 
" pod="openshift-marketplace/redhat-operators-sb7kt" Oct 14 15:23:48 crc kubenswrapper[4860]: I1014 15:23:48.094985 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17a750b8-880b-4623-8629-ca48c942292c-catalog-content\") pod \"redhat-operators-sb7kt\" (UID: \"17a750b8-880b-4623-8629-ca48c942292c\") " pod="openshift-marketplace/redhat-operators-sb7kt" Oct 14 15:23:48 crc kubenswrapper[4860]: I1014 15:23:48.095986 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17a750b8-880b-4623-8629-ca48c942292c-utilities\") pod \"redhat-operators-sb7kt\" (UID: \"17a750b8-880b-4623-8629-ca48c942292c\") " pod="openshift-marketplace/redhat-operators-sb7kt" Oct 14 15:23:48 crc kubenswrapper[4860]: I1014 15:23:48.126910 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gt99\" (UniqueName: \"kubernetes.io/projected/17a750b8-880b-4623-8629-ca48c942292c-kube-api-access-9gt99\") pod \"redhat-operators-sb7kt\" (UID: \"17a750b8-880b-4623-8629-ca48c942292c\") " pod="openshift-marketplace/redhat-operators-sb7kt" Oct 14 15:23:48 crc kubenswrapper[4860]: I1014 15:23:48.130448 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sb7kt" Oct 14 15:23:48 crc kubenswrapper[4860]: I1014 15:23:48.669173 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sb7kt"] Oct 14 15:23:49 crc kubenswrapper[4860]: I1014 15:23:49.661252 4860 generic.go:334] "Generic (PLEG): container finished" podID="17a750b8-880b-4623-8629-ca48c942292c" containerID="6410af40dc082c99fd0d6bdb79205b74b4fe3c22787d6790c4ee2fae0cc98e01" exitCode=0 Oct 14 15:23:49 crc kubenswrapper[4860]: I1014 15:23:49.661386 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb7kt" event={"ID":"17a750b8-880b-4623-8629-ca48c942292c","Type":"ContainerDied","Data":"6410af40dc082c99fd0d6bdb79205b74b4fe3c22787d6790c4ee2fae0cc98e01"} Oct 14 15:23:49 crc kubenswrapper[4860]: I1014 15:23:49.661597 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb7kt" event={"ID":"17a750b8-880b-4623-8629-ca48c942292c","Type":"ContainerStarted","Data":"ce3fe29758a5faca48dc98fca05032146dcb73dcad6d1266903e522c3c55e814"} Oct 14 15:23:50 crc kubenswrapper[4860]: I1014 15:23:50.672321 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb7kt" event={"ID":"17a750b8-880b-4623-8629-ca48c942292c","Type":"ContainerStarted","Data":"aa260d0b8a62a7ca308f5c3dfa2512f3e2c617319f44940cf4471551f2d970b6"} Oct 14 15:23:56 crc kubenswrapper[4860]: I1014 15:23:56.742588 4860 generic.go:334] "Generic (PLEG): container finished" podID="1e540b72-fca1-4c14-8830-8fa070543f8c" containerID="f1cf643add58b06ec232798aecfc818ce4c9fdf46a8a6cb0213510bca115355e" exitCode=0 Oct 14 15:23:56 crc kubenswrapper[4860]: I1014 15:23:56.742664 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" event={"ID":"1e540b72-fca1-4c14-8830-8fa070543f8c","Type":"ContainerDied","Data":"f1cf643add58b06ec232798aecfc818ce4c9fdf46a8a6cb0213510bca115355e"} Oct 14 15:23:56 crc kubenswrapper[4860]: I1014 15:23:56.745685 4860 generic.go:334] "Generic (PLEG): container finished" podID="17a750b8-880b-4623-8629-ca48c942292c" 
containerID="aa260d0b8a62a7ca308f5c3dfa2512f3e2c617319f44940cf4471551f2d970b6" exitCode=0 Oct 14 15:23:56 crc kubenswrapper[4860]: I1014 15:23:56.745709 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb7kt" event={"ID":"17a750b8-880b-4623-8629-ca48c942292c","Type":"ContainerDied","Data":"aa260d0b8a62a7ca308f5c3dfa2512f3e2c617319f44940cf4471551f2d970b6"} Oct 14 15:23:57 crc kubenswrapper[4860]: I1014 15:23:57.755792 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb7kt" event={"ID":"17a750b8-880b-4623-8629-ca48c942292c","Type":"ContainerStarted","Data":"51178a860b1d5d2552b74e70669595304c13e57bcafebb23f66b369fef3dff7e"} Oct 14 15:23:57 crc kubenswrapper[4860]: I1014 15:23:57.785443 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sb7kt" podStartSLOduration=3.002834655 podStartE2EDuration="10.785423555s" podCreationTimestamp="2025-10-14 15:23:47 +0000 UTC" firstStartedPulling="2025-10-14 15:23:49.66478752 +0000 UTC m=+2091.251570969" lastFinishedPulling="2025-10-14 15:23:57.44737642 +0000 UTC m=+2099.034159869" observedRunningTime="2025-10-14 15:23:57.775542366 +0000 UTC m=+2099.362325825" watchObservedRunningTime="2025-10-14 15:23:57.785423555 +0000 UTC m=+2099.372207004" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.130914 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sb7kt" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.131490 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sb7kt" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.230434 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.274867 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e540b72-fca1-4c14-8830-8fa070543f8c-ssh-key\") pod \"1e540b72-fca1-4c14-8830-8fa070543f8c\" (UID: \"1e540b72-fca1-4c14-8830-8fa070543f8c\") " Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.275142 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e540b72-fca1-4c14-8830-8fa070543f8c-inventory\") pod \"1e540b72-fca1-4c14-8830-8fa070543f8c\" (UID: \"1e540b72-fca1-4c14-8830-8fa070543f8c\") " Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.275318 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k6jd\" (UniqueName: \"kubernetes.io/projected/1e540b72-fca1-4c14-8830-8fa070543f8c-kube-api-access-2k6jd\") pod \"1e540b72-fca1-4c14-8830-8fa070543f8c\" (UID: \"1e540b72-fca1-4c14-8830-8fa070543f8c\") " Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.300544 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e540b72-fca1-4c14-8830-8fa070543f8c-kube-api-access-2k6jd" (OuterVolumeSpecName: "kube-api-access-2k6jd") pod "1e540b72-fca1-4c14-8830-8fa070543f8c" (UID: "1e540b72-fca1-4c14-8830-8fa070543f8c"). InnerVolumeSpecName "kube-api-access-2k6jd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.323066 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e540b72-fca1-4c14-8830-8fa070543f8c-inventory" (OuterVolumeSpecName: "inventory") pod "1e540b72-fca1-4c14-8830-8fa070543f8c" (UID: "1e540b72-fca1-4c14-8830-8fa070543f8c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.326088 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e540b72-fca1-4c14-8830-8fa070543f8c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1e540b72-fca1-4c14-8830-8fa070543f8c" (UID: "1e540b72-fca1-4c14-8830-8fa070543f8c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.376858 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k6jd\" (UniqueName: \"kubernetes.io/projected/1e540b72-fca1-4c14-8830-8fa070543f8c-kube-api-access-2k6jd\") on node \"crc\" DevicePath \"\"" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.376890 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e540b72-fca1-4c14-8830-8fa070543f8c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.376899 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e540b72-fca1-4c14-8830-8fa070543f8c-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.763964 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.763972 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dq8ms" event={"ID":"1e540b72-fca1-4c14-8830-8fa070543f8c","Type":"ContainerDied","Data":"7378014fa0e61a839eced990bf5521f3142a6fd2fd7de84d76eb7c33d3807e12"} Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.764505 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7378014fa0e61a839eced990bf5521f3142a6fd2fd7de84d76eb7c33d3807e12" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.926795 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6"] Oct 14 15:23:58 crc kubenswrapper[4860]: E1014 15:23:58.927244 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e540b72-fca1-4c14-8830-8fa070543f8c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.927265 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e540b72-fca1-4c14-8830-8fa070543f8c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.927490 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e540b72-fca1-4c14-8830-8fa070543f8c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.928201 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.931402 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.931482 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.931842 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.931891 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.940653 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6"] Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.988588 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd03522b-4930-4c43-ae91-76bd6891424a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6\" (UID: \"fd03522b-4930-4c43-ae91-76bd6891424a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.988870 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd03522b-4930-4c43-ae91-76bd6891424a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6\" (UID: \"fd03522b-4930-4c43-ae91-76bd6891424a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6" Oct 14 15:23:58 crc kubenswrapper[4860]: I1014 15:23:58.989058 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9gsh\" (UniqueName: \"kubernetes.io/projected/fd03522b-4930-4c43-ae91-76bd6891424a-kube-api-access-c9gsh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6\" (UID: \"fd03522b-4930-4c43-ae91-76bd6891424a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.090552 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd03522b-4930-4c43-ae91-76bd6891424a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6\" (UID: \"fd03522b-4930-4c43-ae91-76bd6891424a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.090669 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd03522b-4930-4c43-ae91-76bd6891424a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6\" (UID: \"fd03522b-4930-4c43-ae91-76bd6891424a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.090711 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9gsh\" (UniqueName: \"kubernetes.io/projected/fd03522b-4930-4c43-ae91-76bd6891424a-kube-api-access-c9gsh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6\" 
(UID: \"fd03522b-4930-4c43-ae91-76bd6891424a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.092976 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.094845 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.104840 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd03522b-4930-4c43-ae91-76bd6891424a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6\" (UID: \"fd03522b-4930-4c43-ae91-76bd6891424a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.105244 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd03522b-4930-4c43-ae91-76bd6891424a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6\" (UID: \"fd03522b-4930-4c43-ae91-76bd6891424a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.123856 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9gsh\" (UniqueName: \"kubernetes.io/projected/fd03522b-4930-4c43-ae91-76bd6891424a-kube-api-access-c9gsh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6\" (UID: \"fd03522b-4930-4c43-ae91-76bd6891424a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.179340 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sb7kt" podUID="17a750b8-880b-4623-8629-ca48c942292c" containerName="registry-server" probeResult="failure" output=< Oct 14 15:23:59 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:23:59 crc kubenswrapper[4860]: > Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.249006 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.258100 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.334296 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hxxkn"] Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.343587 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.366344 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hxxkn"] Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.500441 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-utilities\") pod \"community-operators-hxxkn\" (UID: \"b66b76a5-9a3e-4185-9ef7-9c344b553f6e\") " pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.500554 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-catalog-content\") pod \"community-operators-hxxkn\" (UID: \"b66b76a5-9a3e-4185-9ef7-9c344b553f6e\") " pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.500723 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn9n6\" (UniqueName: \"kubernetes.io/projected/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-kube-api-access-cn9n6\") pod \"community-operators-hxxkn\" (UID: \"b66b76a5-9a3e-4185-9ef7-9c344b553f6e\") " pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.601902 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn9n6\" (UniqueName: \"kubernetes.io/projected/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-kube-api-access-cn9n6\") pod \"community-operators-hxxkn\" (UID: \"b66b76a5-9a3e-4185-9ef7-9c344b553f6e\") " pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.602009 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-utilities\") pod \"community-operators-hxxkn\" (UID: \"b66b76a5-9a3e-4185-9ef7-9c344b553f6e\") " pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.602098 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-catalog-content\") pod \"community-operators-hxxkn\" (UID: \"b66b76a5-9a3e-4185-9ef7-9c344b553f6e\") " pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.602951 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-utilities\") pod \"community-operators-hxxkn\" (UID: \"b66b76a5-9a3e-4185-9ef7-9c344b553f6e\") " pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.603106 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-catalog-content\") pod \"community-operators-hxxkn\" (UID: \"b66b76a5-9a3e-4185-9ef7-9c344b553f6e\") " pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.629357 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cn9n6\" (UniqueName: \"kubernetes.io/projected/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-kube-api-access-cn9n6\") pod \"community-operators-hxxkn\" (UID: \"b66b76a5-9a3e-4185-9ef7-9c344b553f6e\") " pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:23:59 crc kubenswrapper[4860]: I1014 15:23:59.761820 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:24:00 crc kubenswrapper[4860]: I1014 15:24:00.044196 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6"] Oct 14 15:24:00 crc kubenswrapper[4860]: I1014 15:24:00.412025 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:24:00 crc kubenswrapper[4860]: I1014 15:24:00.448936 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hxxkn"] Oct 14 15:24:00 crc kubenswrapper[4860]: W1014 15:24:00.454991 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb66b76a5_9a3e_4185_9ef7_9c344b553f6e.slice/crio-532422d65c11deb0f08bfea556df88b518fb9b22068144de1fe3804e05bde0e2 WatchSource:0}: Error finding container 532422d65c11deb0f08bfea556df88b518fb9b22068144de1fe3804e05bde0e2: Status 404 returned error can't find the container with id 532422d65c11deb0f08bfea556df88b518fb9b22068144de1fe3804e05bde0e2 Oct 14 15:24:00 crc kubenswrapper[4860]: I1014 15:24:00.808409 4860 generic.go:334] "Generic (PLEG): container finished" podID="b66b76a5-9a3e-4185-9ef7-9c344b553f6e" containerID="a6075b5c4333e67c69d445630bdd500fe4c7c01907f508c4f0b429737d851e8b" exitCode=0 Oct 14 15:24:00 crc kubenswrapper[4860]: I1014 15:24:00.808483 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxkn" event={"ID":"b66b76a5-9a3e-4185-9ef7-9c344b553f6e","Type":"ContainerDied","Data":"a6075b5c4333e67c69d445630bdd500fe4c7c01907f508c4f0b429737d851e8b"} Oct 14 15:24:00 crc kubenswrapper[4860]: I1014 15:24:00.808737 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxkn" event={"ID":"b66b76a5-9a3e-4185-9ef7-9c344b553f6e","Type":"ContainerStarted","Data":"532422d65c11deb0f08bfea556df88b518fb9b22068144de1fe3804e05bde0e2"} Oct 14 15:24:00 crc kubenswrapper[4860]: I1014 15:24:00.813123 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6" event={"ID":"fd03522b-4930-4c43-ae91-76bd6891424a","Type":"ContainerStarted","Data":"8f42dc5f1fd2393961600e27d16bcdcf813d428c7d36358eafb9fc7e67d68660"} Oct 14 15:24:00 crc kubenswrapper[4860]: I1014 15:24:00.813163 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6" event={"ID":"fd03522b-4930-4c43-ae91-76bd6891424a","Type":"ContainerStarted","Data":"b59a9a4eef9051aba08e9db77758fdc25f7661b1733e1284d683cfedfbaba885"} Oct 14 15:24:00 crc kubenswrapper[4860]: I1014 15:24:00.849318 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6" podStartSLOduration=2.5923221610000002 podStartE2EDuration="2.849297878s" podCreationTimestamp="2025-10-14 15:23:58 +0000 UTC" firstStartedPulling="2025-10-14 15:24:00.148304655 +0000 UTC 
m=+2101.735088104" lastFinishedPulling="2025-10-14 15:24:00.405280372 +0000 UTC m=+2101.992063821" observedRunningTime="2025-10-14 15:24:00.840359042 +0000 UTC m=+2102.427142491" watchObservedRunningTime="2025-10-14 15:24:00.849297878 +0000 UTC m=+2102.436081327" Oct 14 15:24:03 crc kubenswrapper[4860]: I1014 15:24:03.850603 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxkn" event={"ID":"b66b76a5-9a3e-4185-9ef7-9c344b553f6e","Type":"ContainerStarted","Data":"ee535582d775b8c67a2f73d74c0af4f175bd06071cc706e05b91670cc2577939"} Oct 14 15:24:08 crc kubenswrapper[4860]: I1014 15:24:08.895834 4860 generic.go:334] "Generic (PLEG): container finished" podID="b66b76a5-9a3e-4185-9ef7-9c344b553f6e" containerID="ee535582d775b8c67a2f73d74c0af4f175bd06071cc706e05b91670cc2577939" exitCode=0 Oct 14 15:24:08 crc kubenswrapper[4860]: I1014 15:24:08.895907 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxkn" event={"ID":"b66b76a5-9a3e-4185-9ef7-9c344b553f6e","Type":"ContainerDied","Data":"ee535582d775b8c67a2f73d74c0af4f175bd06071cc706e05b91670cc2577939"} Oct 14 15:24:09 crc kubenswrapper[4860]: I1014 15:24:09.183899 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sb7kt" podUID="17a750b8-880b-4623-8629-ca48c942292c" containerName="registry-server" probeResult="failure" output=< Oct 14 15:24:09 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:24:09 crc kubenswrapper[4860]: > Oct 14 15:24:10 crc kubenswrapper[4860]: I1014 15:24:10.005178 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jzxrk"] Oct 14 15:24:10 crc kubenswrapper[4860]: I1014 15:24:10.007651 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:10 crc kubenswrapper[4860]: I1014 15:24:10.013995 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzxrk"] Oct 14 15:24:10 crc kubenswrapper[4860]: I1014 15:24:10.120520 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde73573-e4b4-4aef-88dc-593562596aa5-catalog-content\") pod \"redhat-marketplace-jzxrk\" (UID: \"fde73573-e4b4-4aef-88dc-593562596aa5\") " pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:10 crc kubenswrapper[4860]: I1014 15:24:10.120925 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde73573-e4b4-4aef-88dc-593562596aa5-utilities\") pod \"redhat-marketplace-jzxrk\" (UID: \"fde73573-e4b4-4aef-88dc-593562596aa5\") " pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:10 crc kubenswrapper[4860]: I1014 15:24:10.121016 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg7cf\" (UniqueName: \"kubernetes.io/projected/fde73573-e4b4-4aef-88dc-593562596aa5-kube-api-access-bg7cf\") pod \"redhat-marketplace-jzxrk\" (UID: \"fde73573-e4b4-4aef-88dc-593562596aa5\") " pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:10 crc kubenswrapper[4860]: I1014 15:24:10.224234 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde73573-e4b4-4aef-88dc-593562596aa5-catalog-content\") pod \"redhat-marketplace-jzxrk\" (UID: \"fde73573-e4b4-4aef-88dc-593562596aa5\") " pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:10 crc kubenswrapper[4860]: I1014 15:24:10.224938 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde73573-e4b4-4aef-88dc-593562596aa5-catalog-content\") pod \"redhat-marketplace-jzxrk\" (UID: \"fde73573-e4b4-4aef-88dc-593562596aa5\") " pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:10 crc kubenswrapper[4860]: I1014 15:24:10.225321 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde73573-e4b4-4aef-88dc-593562596aa5-utilities\") pod \"redhat-marketplace-jzxrk\" (UID: \"fde73573-e4b4-4aef-88dc-593562596aa5\") " pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:10 crc kubenswrapper[4860]: I1014 15:24:10.225507 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde73573-e4b4-4aef-88dc-593562596aa5-utilities\") pod \"redhat-marketplace-jzxrk\" (UID: \"fde73573-e4b4-4aef-88dc-593562596aa5\") " pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:10 crc kubenswrapper[4860]: I1014 15:24:10.225923 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg7cf\" (UniqueName: \"kubernetes.io/projected/fde73573-e4b4-4aef-88dc-593562596aa5-kube-api-access-bg7cf\") pod \"redhat-marketplace-jzxrk\" (UID: \"fde73573-e4b4-4aef-88dc-593562596aa5\") " pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:10 crc kubenswrapper[4860]: I1014 15:24:10.251743 4860 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bg7cf\" (UniqueName: \"kubernetes.io/projected/fde73573-e4b4-4aef-88dc-593562596aa5-kube-api-access-bg7cf\") pod \"redhat-marketplace-jzxrk\" (UID: \"fde73573-e4b4-4aef-88dc-593562596aa5\") " pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:10 crc kubenswrapper[4860]: I1014 15:24:10.340371 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:10 crc kubenswrapper[4860]: I1014 15:24:10.921830 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxkn" event={"ID":"b66b76a5-9a3e-4185-9ef7-9c344b553f6e","Type":"ContainerStarted","Data":"39e454564eb443df0b0e1b7d92611a051c947dfae35ddffea9e546024be83527"} Oct 14 15:24:10 crc kubenswrapper[4860]: I1014 15:24:10.948697 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzxrk"] Oct 14 15:24:10 crc kubenswrapper[4860]: I1014 15:24:10.971958 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hxxkn" podStartSLOduration=2.997324872 podStartE2EDuration="11.971916094s" podCreationTimestamp="2025-10-14 15:23:59 +0000 UTC" firstStartedPulling="2025-10-14 15:24:00.810919087 +0000 UTC m=+2102.397702536" lastFinishedPulling="2025-10-14 15:24:09.785510309 +0000 UTC m=+2111.372293758" observedRunningTime="2025-10-14 15:24:10.965694282 +0000 UTC m=+2112.552477751" watchObservedRunningTime="2025-10-14 15:24:10.971916094 +0000 UTC m=+2112.558699543" Oct 14 15:24:11 crc kubenswrapper[4860]: I1014 15:24:11.932409 4860 generic.go:334] "Generic (PLEG): container finished" podID="fde73573-e4b4-4aef-88dc-593562596aa5" containerID="e8626437f820692b5e470bc3969a29938fb4a740a274f448d6843aa84803c2b8" exitCode=0 Oct 14 15:24:11 crc kubenswrapper[4860]: I1014 15:24:11.932510 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzxrk" event={"ID":"fde73573-e4b4-4aef-88dc-593562596aa5","Type":"ContainerDied","Data":"e8626437f820692b5e470bc3969a29938fb4a740a274f448d6843aa84803c2b8"} Oct 14 15:24:11 crc kubenswrapper[4860]: I1014 15:24:11.932798 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzxrk" event={"ID":"fde73573-e4b4-4aef-88dc-593562596aa5","Type":"ContainerStarted","Data":"6b4189297e2225d0517a03f14a6f25b61f519c6d6bf310825753b99800d52ad1"} Oct 14 15:24:12 crc kubenswrapper[4860]: I1014 15:24:12.944255 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzxrk" event={"ID":"fde73573-e4b4-4aef-88dc-593562596aa5","Type":"ContainerStarted","Data":"d612037578209b94b1750484ed9d6914d31438db37e08e722b2052ac5ec92b27"} Oct 14 15:24:13 crc kubenswrapper[4860]: I1014 15:24:13.956378 4860 generic.go:334] "Generic (PLEG): container finished" podID="fde73573-e4b4-4aef-88dc-593562596aa5" containerID="d612037578209b94b1750484ed9d6914d31438db37e08e722b2052ac5ec92b27" exitCode=0 Oct 14 15:24:13 crc kubenswrapper[4860]: I1014 15:24:13.956515 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzxrk" event={"ID":"fde73573-e4b4-4aef-88dc-593562596aa5","Type":"ContainerDied","Data":"d612037578209b94b1750484ed9d6914d31438db37e08e722b2052ac5ec92b27"} Oct 14 15:24:14 crc kubenswrapper[4860]: I1014 15:24:14.969776 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-jzxrk" event={"ID":"fde73573-e4b4-4aef-88dc-593562596aa5","Type":"ContainerStarted","Data":"f20eb10d3b6fa12231000fcd849cab2a5f7941dda36e1981d097ff02c572e505"} Oct 14 15:24:16 crc kubenswrapper[4860]: I1014 15:24:16.000641 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jzxrk" podStartSLOduration=4.340662324 podStartE2EDuration="7.000608314s" podCreationTimestamp="2025-10-14 15:24:09 +0000 UTC" firstStartedPulling="2025-10-14 15:24:11.934457685 +0000 UTC m=+2113.521241134" lastFinishedPulling="2025-10-14 15:24:14.594403675 +0000 UTC m=+2116.181187124" observedRunningTime="2025-10-14 15:24:15.99713213 +0000 UTC m=+2117.583915589" watchObservedRunningTime="2025-10-14 15:24:16.000608314 +0000 UTC m=+2117.587391763" Oct 14 15:24:17 crc kubenswrapper[4860]: I1014 15:24:17.402933 4860 scope.go:117] "RemoveContainer" containerID="c3e740fdc4bd5da9ab7bf4ed22df472ae6d7eefa0ff590e11d50f493c88d0647" Oct 14 15:24:19 crc kubenswrapper[4860]: I1014 15:24:19.187362 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sb7kt" podUID="17a750b8-880b-4623-8629-ca48c942292c" containerName="registry-server" probeResult="failure" output=< Oct 14 15:24:19 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:24:19 crc kubenswrapper[4860]: > Oct 14 15:24:19 crc kubenswrapper[4860]: I1014 15:24:19.762653 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:24:19 crc kubenswrapper[4860]: I1014 15:24:19.762712 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:24:20 crc kubenswrapper[4860]: I1014 15:24:20.341468 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:20 crc kubenswrapper[4860]: I1014 15:24:20.341847 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:20 crc kubenswrapper[4860]: I1014 15:24:20.399524 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:20 crc kubenswrapper[4860]: I1014 15:24:20.810714 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hxxkn" podUID="b66b76a5-9a3e-4185-9ef7-9c344b553f6e" containerName="registry-server" probeResult="failure" output=< Oct 14 15:24:20 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:24:20 crc kubenswrapper[4860]: > Oct 14 15:24:21 crc kubenswrapper[4860]: I1014 15:24:21.085355 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:21 crc kubenswrapper[4860]: I1014 15:24:21.142301 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzxrk"] Oct 14 15:24:23 crc kubenswrapper[4860]: I1014 15:24:23.048773 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jzxrk" podUID="fde73573-e4b4-4aef-88dc-593562596aa5" containerName="registry-server" containerID="cri-o://f20eb10d3b6fa12231000fcd849cab2a5f7941dda36e1981d097ff02c572e505" gracePeriod=2 Oct 14 15:24:24 crc 
kubenswrapper[4860]: I1014 15:24:24.065356 4860 generic.go:334] "Generic (PLEG): container finished" podID="fde73573-e4b4-4aef-88dc-593562596aa5" containerID="f20eb10d3b6fa12231000fcd849cab2a5f7941dda36e1981d097ff02c572e505" exitCode=0 Oct 14 15:24:24 crc kubenswrapper[4860]: I1014 15:24:24.065406 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzxrk" event={"ID":"fde73573-e4b4-4aef-88dc-593562596aa5","Type":"ContainerDied","Data":"f20eb10d3b6fa12231000fcd849cab2a5f7941dda36e1981d097ff02c572e505"} Oct 14 15:24:24 crc kubenswrapper[4860]: I1014 15:24:24.347391 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:24 crc kubenswrapper[4860]: I1014 15:24:24.439240 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg7cf\" (UniqueName: \"kubernetes.io/projected/fde73573-e4b4-4aef-88dc-593562596aa5-kube-api-access-bg7cf\") pod \"fde73573-e4b4-4aef-88dc-593562596aa5\" (UID: \"fde73573-e4b4-4aef-88dc-593562596aa5\") " Oct 14 15:24:24 crc kubenswrapper[4860]: I1014 15:24:24.440407 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde73573-e4b4-4aef-88dc-593562596aa5-utilities\") pod \"fde73573-e4b4-4aef-88dc-593562596aa5\" (UID: \"fde73573-e4b4-4aef-88dc-593562596aa5\") " Oct 14 15:24:24 crc kubenswrapper[4860]: I1014 15:24:24.440689 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde73573-e4b4-4aef-88dc-593562596aa5-catalog-content\") pod \"fde73573-e4b4-4aef-88dc-593562596aa5\" (UID: \"fde73573-e4b4-4aef-88dc-593562596aa5\") " Oct 14 15:24:24 crc kubenswrapper[4860]: I1014 15:24:24.441087 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fde73573-e4b4-4aef-88dc-593562596aa5-utilities" (OuterVolumeSpecName: "utilities") pod "fde73573-e4b4-4aef-88dc-593562596aa5" (UID: "fde73573-e4b4-4aef-88dc-593562596aa5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:24:24 crc kubenswrapper[4860]: I1014 15:24:24.441964 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde73573-e4b4-4aef-88dc-593562596aa5-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:24:24 crc kubenswrapper[4860]: I1014 15:24:24.446253 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde73573-e4b4-4aef-88dc-593562596aa5-kube-api-access-bg7cf" (OuterVolumeSpecName: "kube-api-access-bg7cf") pod "fde73573-e4b4-4aef-88dc-593562596aa5" (UID: "fde73573-e4b4-4aef-88dc-593562596aa5"). InnerVolumeSpecName "kube-api-access-bg7cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:24:24 crc kubenswrapper[4860]: I1014 15:24:24.453649 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fde73573-e4b4-4aef-88dc-593562596aa5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fde73573-e4b4-4aef-88dc-593562596aa5" (UID: "fde73573-e4b4-4aef-88dc-593562596aa5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:24:24 crc kubenswrapper[4860]: I1014 15:24:24.544454 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde73573-e4b4-4aef-88dc-593562596aa5-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:24:24 crc kubenswrapper[4860]: I1014 15:24:24.544514 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg7cf\" (UniqueName: \"kubernetes.io/projected/fde73573-e4b4-4aef-88dc-593562596aa5-kube-api-access-bg7cf\") on node \"crc\" DevicePath \"\"" Oct 14 15:24:25 crc kubenswrapper[4860]: I1014 15:24:25.077691 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzxrk" event={"ID":"fde73573-e4b4-4aef-88dc-593562596aa5","Type":"ContainerDied","Data":"6b4189297e2225d0517a03f14a6f25b61f519c6d6bf310825753b99800d52ad1"} Oct 14 15:24:25 crc kubenswrapper[4860]: I1014 15:24:25.077745 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzxrk" Oct 14 15:24:25 crc kubenswrapper[4860]: I1014 15:24:25.077749 4860 scope.go:117] "RemoveContainer" containerID="f20eb10d3b6fa12231000fcd849cab2a5f7941dda36e1981d097ff02c572e505" Oct 14 15:24:25 crc kubenswrapper[4860]: I1014 15:24:25.107855 4860 scope.go:117] "RemoveContainer" containerID="d612037578209b94b1750484ed9d6914d31438db37e08e722b2052ac5ec92b27" Oct 14 15:24:25 crc kubenswrapper[4860]: I1014 15:24:25.123142 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzxrk"] Oct 14 15:24:25 crc kubenswrapper[4860]: I1014 15:24:25.130634 4860 scope.go:117] "RemoveContainer" containerID="e8626437f820692b5e470bc3969a29938fb4a740a274f448d6843aa84803c2b8" Oct 14 15:24:25 crc kubenswrapper[4860]: I1014 15:24:25.133903 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzxrk"] Oct 14 15:24:27 crc kubenswrapper[4860]: I1014 15:24:27.073588 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde73573-e4b4-4aef-88dc-593562596aa5" path="/var/lib/kubelet/pods/fde73573-e4b4-4aef-88dc-593562596aa5/volumes" Oct 14 15:24:29 crc kubenswrapper[4860]: I1014 15:24:29.192183 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sb7kt" podUID="17a750b8-880b-4623-8629-ca48c942292c" containerName="registry-server" probeResult="failure" output=< Oct 14 15:24:29 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:24:29 crc kubenswrapper[4860]: > Oct 14 15:24:29 crc kubenswrapper[4860]: I1014 15:24:29.246096 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:24:29 crc kubenswrapper[4860]: I1014 15:24:29.246169 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:24:30 crc kubenswrapper[4860]: I1014 15:24:30.806516 4860 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-hxxkn" podUID="b66b76a5-9a3e-4185-9ef7-9c344b553f6e" containerName="registry-server" probeResult="failure" output=< Oct 14 15:24:30 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:24:30 crc kubenswrapper[4860]: > Oct 14 15:24:39 crc kubenswrapper[4860]: I1014 15:24:39.178870 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sb7kt" podUID="17a750b8-880b-4623-8629-ca48c942292c" containerName="registry-server" probeResult="failure" output=< Oct 14 15:24:39 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:24:39 crc kubenswrapper[4860]: > Oct 14 15:24:39 crc kubenswrapper[4860]: I1014 15:24:39.811395 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:24:39 crc kubenswrapper[4860]: I1014 15:24:39.863446 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:24:40 crc kubenswrapper[4860]: I1014 15:24:40.052912 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hxxkn"] Oct 14 15:24:41 crc kubenswrapper[4860]: I1014 15:24:41.201263 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hxxkn" podUID="b66b76a5-9a3e-4185-9ef7-9c344b553f6e" containerName="registry-server" containerID="cri-o://39e454564eb443df0b0e1b7d92611a051c947dfae35ddffea9e546024be83527" gracePeriod=2 Oct 14 15:24:41 crc kubenswrapper[4860]: I1014 15:24:41.688535 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:24:41 crc kubenswrapper[4860]: I1014 15:24:41.802601 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn9n6\" (UniqueName: \"kubernetes.io/projected/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-kube-api-access-cn9n6\") pod \"b66b76a5-9a3e-4185-9ef7-9c344b553f6e\" (UID: \"b66b76a5-9a3e-4185-9ef7-9c344b553f6e\") " Oct 14 15:24:41 crc kubenswrapper[4860]: I1014 15:24:41.802708 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-catalog-content\") pod \"b66b76a5-9a3e-4185-9ef7-9c344b553f6e\" (UID: \"b66b76a5-9a3e-4185-9ef7-9c344b553f6e\") " Oct 14 15:24:41 crc kubenswrapper[4860]: I1014 15:24:41.802879 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-utilities\") pod \"b66b76a5-9a3e-4185-9ef7-9c344b553f6e\" (UID: \"b66b76a5-9a3e-4185-9ef7-9c344b553f6e\") " Oct 14 15:24:41 crc kubenswrapper[4860]: I1014 15:24:41.803925 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-utilities" (OuterVolumeSpecName: "utilities") pod "b66b76a5-9a3e-4185-9ef7-9c344b553f6e" (UID: "b66b76a5-9a3e-4185-9ef7-9c344b553f6e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:24:41 crc kubenswrapper[4860]: I1014 15:24:41.810955 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-kube-api-access-cn9n6" (OuterVolumeSpecName: "kube-api-access-cn9n6") pod "b66b76a5-9a3e-4185-9ef7-9c344b553f6e" (UID: "b66b76a5-9a3e-4185-9ef7-9c344b553f6e"). InnerVolumeSpecName "kube-api-access-cn9n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:24:41 crc kubenswrapper[4860]: I1014 15:24:41.852362 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b66b76a5-9a3e-4185-9ef7-9c344b553f6e" (UID: "b66b76a5-9a3e-4185-9ef7-9c344b553f6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:24:41 crc kubenswrapper[4860]: I1014 15:24:41.905830 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:24:41 crc kubenswrapper[4860]: I1014 15:24:41.906215 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn9n6\" (UniqueName: \"kubernetes.io/projected/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-kube-api-access-cn9n6\") on node \"crc\" DevicePath \"\"" Oct 14 15:24:41 crc kubenswrapper[4860]: I1014 15:24:41.906344 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b66b76a5-9a3e-4185-9ef7-9c344b553f6e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:24:42 crc kubenswrapper[4860]: I1014 15:24:42.210575 4860 generic.go:334] "Generic (PLEG): container finished" podID="b66b76a5-9a3e-4185-9ef7-9c344b553f6e" containerID="39e454564eb443df0b0e1b7d92611a051c947dfae35ddffea9e546024be83527" exitCode=0 Oct 14 15:24:42 crc kubenswrapper[4860]: I1014 15:24:42.210630 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hxxkn" Oct 14 15:24:42 crc kubenswrapper[4860]: I1014 15:24:42.210631 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxkn" event={"ID":"b66b76a5-9a3e-4185-9ef7-9c344b553f6e","Type":"ContainerDied","Data":"39e454564eb443df0b0e1b7d92611a051c947dfae35ddffea9e546024be83527"} Oct 14 15:24:42 crc kubenswrapper[4860]: I1014 15:24:42.210722 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxkn" event={"ID":"b66b76a5-9a3e-4185-9ef7-9c344b553f6e","Type":"ContainerDied","Data":"532422d65c11deb0f08bfea556df88b518fb9b22068144de1fe3804e05bde0e2"} Oct 14 15:24:42 crc kubenswrapper[4860]: I1014 15:24:42.210740 4860 scope.go:117] "RemoveContainer" containerID="39e454564eb443df0b0e1b7d92611a051c947dfae35ddffea9e546024be83527" Oct 14 15:24:42 crc kubenswrapper[4860]: I1014 15:24:42.238696 4860 scope.go:117] "RemoveContainer" containerID="ee535582d775b8c67a2f73d74c0af4f175bd06071cc706e05b91670cc2577939" Oct 14 15:24:42 crc kubenswrapper[4860]: I1014 15:24:42.249453 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hxxkn"] Oct 14 15:24:42 crc kubenswrapper[4860]: I1014 15:24:42.257090 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hxxkn"] Oct 14 15:24:42 crc kubenswrapper[4860]: I1014 15:24:42.262383 4860 scope.go:117] "RemoveContainer" containerID="a6075b5c4333e67c69d445630bdd500fe4c7c01907f508c4f0b429737d851e8b" Oct 14 15:24:42 crc kubenswrapper[4860]: I1014 15:24:42.323135 4860 scope.go:117] "RemoveContainer" containerID="39e454564eb443df0b0e1b7d92611a051c947dfae35ddffea9e546024be83527" Oct 14 15:24:42 crc kubenswrapper[4860]: E1014 15:24:42.323925 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39e454564eb443df0b0e1b7d92611a051c947dfae35ddffea9e546024be83527\": container with ID starting with 39e454564eb443df0b0e1b7d92611a051c947dfae35ddffea9e546024be83527 not found: ID does not exist" containerID="39e454564eb443df0b0e1b7d92611a051c947dfae35ddffea9e546024be83527" Oct 14 15:24:42 crc kubenswrapper[4860]: I1014 15:24:42.324174 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39e454564eb443df0b0e1b7d92611a051c947dfae35ddffea9e546024be83527"} err="failed to get container status \"39e454564eb443df0b0e1b7d92611a051c947dfae35ddffea9e546024be83527\": rpc error: code = NotFound desc = could not find container \"39e454564eb443df0b0e1b7d92611a051c947dfae35ddffea9e546024be83527\": container with ID starting with 39e454564eb443df0b0e1b7d92611a051c947dfae35ddffea9e546024be83527 not found: ID does not exist" Oct 14 15:24:42 crc kubenswrapper[4860]: I1014 15:24:42.324207 4860 scope.go:117] "RemoveContainer" containerID="ee535582d775b8c67a2f73d74c0af4f175bd06071cc706e05b91670cc2577939" Oct 14 15:24:42 crc kubenswrapper[4860]: E1014 15:24:42.326135 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee535582d775b8c67a2f73d74c0af4f175bd06071cc706e05b91670cc2577939\": container with ID starting with ee535582d775b8c67a2f73d74c0af4f175bd06071cc706e05b91670cc2577939 not found: ID does not exist" containerID="ee535582d775b8c67a2f73d74c0af4f175bd06071cc706e05b91670cc2577939" Oct 14 15:24:42 crc kubenswrapper[4860]: I1014 15:24:42.326169 4860 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee535582d775b8c67a2f73d74c0af4f175bd06071cc706e05b91670cc2577939"} err="failed to get container status \"ee535582d775b8c67a2f73d74c0af4f175bd06071cc706e05b91670cc2577939\": rpc error: code = NotFound desc = could not find container \"ee535582d775b8c67a2f73d74c0af4f175bd06071cc706e05b91670cc2577939\": container with ID starting with ee535582d775b8c67a2f73d74c0af4f175bd06071cc706e05b91670cc2577939 not found: ID does not exist" Oct 14 15:24:42 crc kubenswrapper[4860]: I1014 15:24:42.326188 4860 scope.go:117] "RemoveContainer" containerID="a6075b5c4333e67c69d445630bdd500fe4c7c01907f508c4f0b429737d851e8b" Oct 14 15:24:42 crc kubenswrapper[4860]: E1014 15:24:42.326547 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6075b5c4333e67c69d445630bdd500fe4c7c01907f508c4f0b429737d851e8b\": container with ID starting with a6075b5c4333e67c69d445630bdd500fe4c7c01907f508c4f0b429737d851e8b not found: ID does not exist" containerID="a6075b5c4333e67c69d445630bdd500fe4c7c01907f508c4f0b429737d851e8b" Oct 14 15:24:42 crc kubenswrapper[4860]: I1014 15:24:42.326592 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6075b5c4333e67c69d445630bdd500fe4c7c01907f508c4f0b429737d851e8b"} err="failed to get container status \"a6075b5c4333e67c69d445630bdd500fe4c7c01907f508c4f0b429737d851e8b\": rpc error: code = NotFound desc = could not find container \"a6075b5c4333e67c69d445630bdd500fe4c7c01907f508c4f0b429737d851e8b\": container with ID starting with a6075b5c4333e67c69d445630bdd500fe4c7c01907f508c4f0b429737d851e8b not found: ID does not exist" Oct 14 15:24:43 crc kubenswrapper[4860]: I1014 15:24:43.081294 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b66b76a5-9a3e-4185-9ef7-9c344b553f6e" path="/var/lib/kubelet/pods/b66b76a5-9a3e-4185-9ef7-9c344b553f6e/volumes" Oct 14 15:24:49 crc kubenswrapper[4860]: I1014 15:24:49.183874 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sb7kt" podUID="17a750b8-880b-4623-8629-ca48c942292c" containerName="registry-server" probeResult="failure" output=< Oct 14 15:24:49 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:24:49 crc kubenswrapper[4860]: > Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.152877 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bx9t7"] Oct 14 15:24:50 crc kubenswrapper[4860]: E1014 15:24:50.153305 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66b76a5-9a3e-4185-9ef7-9c344b553f6e" containerName="registry-server" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.153321 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66b76a5-9a3e-4185-9ef7-9c344b553f6e" containerName="registry-server" Oct 14 15:24:50 crc kubenswrapper[4860]: E1014 15:24:50.153343 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66b76a5-9a3e-4185-9ef7-9c344b553f6e" containerName="extract-utilities" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.153349 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66b76a5-9a3e-4185-9ef7-9c344b553f6e" containerName="extract-utilities" Oct 14 15:24:50 crc kubenswrapper[4860]: E1014 15:24:50.153365 4860 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fde73573-e4b4-4aef-88dc-593562596aa5" containerName="extract-content" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.153371 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde73573-e4b4-4aef-88dc-593562596aa5" containerName="extract-content" Oct 14 15:24:50 crc kubenswrapper[4860]: E1014 15:24:50.153385 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde73573-e4b4-4aef-88dc-593562596aa5" containerName="registry-server" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.153390 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde73573-e4b4-4aef-88dc-593562596aa5" containerName="registry-server" Oct 14 15:24:50 crc kubenswrapper[4860]: E1014 15:24:50.153414 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde73573-e4b4-4aef-88dc-593562596aa5" containerName="extract-utilities" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.153420 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde73573-e4b4-4aef-88dc-593562596aa5" containerName="extract-utilities" Oct 14 15:24:50 crc kubenswrapper[4860]: E1014 15:24:50.153435 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66b76a5-9a3e-4185-9ef7-9c344b553f6e" containerName="extract-content" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.153442 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66b76a5-9a3e-4185-9ef7-9c344b553f6e" containerName="extract-content" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.153674 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde73573-e4b4-4aef-88dc-593562596aa5" containerName="registry-server" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.153701 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="b66b76a5-9a3e-4185-9ef7-9c344b553f6e" containerName="registry-server" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.155241 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bx9t7" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.161672 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bx9t7"] Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.301768 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317fb3d6-ef8d-4085-8e9d-444d045b211b-catalog-content\") pod \"certified-operators-bx9t7\" (UID: \"317fb3d6-ef8d-4085-8e9d-444d045b211b\") " pod="openshift-marketplace/certified-operators-bx9t7" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.302554 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l54wp\" (UniqueName: \"kubernetes.io/projected/317fb3d6-ef8d-4085-8e9d-444d045b211b-kube-api-access-l54wp\") pod \"certified-operators-bx9t7\" (UID: \"317fb3d6-ef8d-4085-8e9d-444d045b211b\") " pod="openshift-marketplace/certified-operators-bx9t7" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.302778 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317fb3d6-ef8d-4085-8e9d-444d045b211b-utilities\") pod \"certified-operators-bx9t7\" (UID: \"317fb3d6-ef8d-4085-8e9d-444d045b211b\") " pod="openshift-marketplace/certified-operators-bx9t7" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.404570 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317fb3d6-ef8d-4085-8e9d-444d045b211b-catalog-content\") pod \"certified-operators-bx9t7\" (UID: \"317fb3d6-ef8d-4085-8e9d-444d045b211b\") " pod="openshift-marketplace/certified-operators-bx9t7" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.404988 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l54wp\" (UniqueName: \"kubernetes.io/projected/317fb3d6-ef8d-4085-8e9d-444d045b211b-kube-api-access-l54wp\") pod \"certified-operators-bx9t7\" (UID: \"317fb3d6-ef8d-4085-8e9d-444d045b211b\") " pod="openshift-marketplace/certified-operators-bx9t7" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.405110 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317fb3d6-ef8d-4085-8e9d-444d045b211b-utilities\") pod \"certified-operators-bx9t7\" (UID: \"317fb3d6-ef8d-4085-8e9d-444d045b211b\") " pod="openshift-marketplace/certified-operators-bx9t7" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.405135 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317fb3d6-ef8d-4085-8e9d-444d045b211b-catalog-content\") pod \"certified-operators-bx9t7\" (UID: \"317fb3d6-ef8d-4085-8e9d-444d045b211b\") " pod="openshift-marketplace/certified-operators-bx9t7" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.405384 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317fb3d6-ef8d-4085-8e9d-444d045b211b-utilities\") pod \"certified-operators-bx9t7\" (UID: \"317fb3d6-ef8d-4085-8e9d-444d045b211b\") " pod="openshift-marketplace/certified-operators-bx9t7" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.431915 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l54wp\" (UniqueName: \"kubernetes.io/projected/317fb3d6-ef8d-4085-8e9d-444d045b211b-kube-api-access-l54wp\") pod \"certified-operators-bx9t7\" (UID: \"317fb3d6-ef8d-4085-8e9d-444d045b211b\") " pod="openshift-marketplace/certified-operators-bx9t7" Oct 14 15:24:50 crc kubenswrapper[4860]: I1014 15:24:50.490317 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bx9t7" Oct 14 15:24:51 crc kubenswrapper[4860]: I1014 15:24:51.106802 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bx9t7"] Oct 14 15:24:51 crc kubenswrapper[4860]: I1014 15:24:51.287066 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bx9t7" event={"ID":"317fb3d6-ef8d-4085-8e9d-444d045b211b","Type":"ContainerStarted","Data":"2712a2a628e70d566ce7200b1754bc1b0604e4d13aa0d1ddcffc5cb11104e001"} Oct 14 15:24:52 crc kubenswrapper[4860]: I1014 15:24:52.297182 4860 generic.go:334] "Generic (PLEG): container finished" podID="317fb3d6-ef8d-4085-8e9d-444d045b211b" containerID="6a66dffd05beb6dceb3e0483367a452fd9ba93ee48462e6e2daafb054b5a465e" exitCode=0 Oct 14 15:24:52 crc kubenswrapper[4860]: I1014 15:24:52.297266 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bx9t7" event={"ID":"317fb3d6-ef8d-4085-8e9d-444d045b211b","Type":"ContainerDied","Data":"6a66dffd05beb6dceb3e0483367a452fd9ba93ee48462e6e2daafb054b5a465e"} Oct 14 15:24:52 crc kubenswrapper[4860]: I1014 15:24:52.300703 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 15:24:53 crc kubenswrapper[4860]: I1014 15:24:53.310803 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bx9t7" event={"ID":"317fb3d6-ef8d-4085-8e9d-444d045b211b","Type":"ContainerStarted","Data":"4f1eb6ab885ae03b552364efae4debcd298e2990234b94715406077a64c9e3a1"} Oct 14 15:24:55 crc kubenswrapper[4860]: I1014 15:24:55.328219 4860 generic.go:334] "Generic (PLEG): container finished" podID="317fb3d6-ef8d-4085-8e9d-444d045b211b" containerID="4f1eb6ab885ae03b552364efae4debcd298e2990234b94715406077a64c9e3a1" exitCode=0 Oct 14 15:24:55 crc kubenswrapper[4860]: I1014 15:24:55.328297 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bx9t7" event={"ID":"317fb3d6-ef8d-4085-8e9d-444d045b211b","Type":"ContainerDied","Data":"4f1eb6ab885ae03b552364efae4debcd298e2990234b94715406077a64c9e3a1"} Oct 14 15:24:56 crc kubenswrapper[4860]: I1014 15:24:56.343219 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bx9t7" event={"ID":"317fb3d6-ef8d-4085-8e9d-444d045b211b","Type":"ContainerStarted","Data":"3de62dc3ccedd18173cf5b2355cd3c8944e5a814d800e2b4153e11ceea33359b"} Oct 14 15:24:56 crc kubenswrapper[4860]: I1014 15:24:56.374424 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bx9t7" podStartSLOduration=2.6883905070000003 podStartE2EDuration="6.37437943s" podCreationTimestamp="2025-10-14 15:24:50 +0000 UTC" firstStartedPulling="2025-10-14 15:24:52.300105744 +0000 UTC m=+2153.886889193" lastFinishedPulling="2025-10-14 15:24:55.986094667 +0000 UTC m=+2157.572878116" observedRunningTime="2025-10-14 15:24:56.364735516 +0000 UTC m=+2157.951518965" watchObservedRunningTime="2025-10-14 
Oct 14 15:24:59 crc kubenswrapper[4860]: I1014 15:24:59.174596 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sb7kt" podUID="17a750b8-880b-4623-8629-ca48c942292c" containerName="registry-server" probeResult="failure" output=<
Oct 14 15:24:59 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s
Oct 14 15:24:59 crc kubenswrapper[4860]: >
Oct 14 15:24:59 crc kubenswrapper[4860]: I1014 15:24:59.245845 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 15:24:59 crc kubenswrapper[4860]: I1014 15:24:59.245912 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 15:25:00 crc kubenswrapper[4860]: I1014 15:25:00.403756 4860 generic.go:334] "Generic (PLEG): container finished" podID="fd03522b-4930-4c43-ae91-76bd6891424a" containerID="8f42dc5f1fd2393961600e27d16bcdcf813d428c7d36358eafb9fc7e67d68660" exitCode=2
Oct 14 15:25:00 crc kubenswrapper[4860]: I1014 15:25:00.404153 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6" event={"ID":"fd03522b-4930-4c43-ae91-76bd6891424a","Type":"ContainerDied","Data":"8f42dc5f1fd2393961600e27d16bcdcf813d428c7d36358eafb9fc7e67d68660"}
Oct 14 15:25:00 crc kubenswrapper[4860]: I1014 15:25:00.491623 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bx9t7"
Oct 14 15:25:00 crc kubenswrapper[4860]: I1014 15:25:00.491841 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bx9t7"
Oct 14 15:25:01 crc kubenswrapper[4860]: I1014 15:25:01.543940 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bx9t7" podUID="317fb3d6-ef8d-4085-8e9d-444d045b211b" containerName="registry-server" probeResult="failure" output=<
Oct 14 15:25:01 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s
Oct 14 15:25:01 crc kubenswrapper[4860]: >
Oct 14 15:25:01 crc kubenswrapper[4860]: I1014 15:25:01.824366 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6"
Oct 14 15:25:01 crc kubenswrapper[4860]: I1014 15:25:01.939167 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd03522b-4930-4c43-ae91-76bd6891424a-inventory\") pod \"fd03522b-4930-4c43-ae91-76bd6891424a\" (UID: \"fd03522b-4930-4c43-ae91-76bd6891424a\") "
Oct 14 15:25:01 crc kubenswrapper[4860]: I1014 15:25:01.939233 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9gsh\" (UniqueName: \"kubernetes.io/projected/fd03522b-4930-4c43-ae91-76bd6891424a-kube-api-access-c9gsh\") pod \"fd03522b-4930-4c43-ae91-76bd6891424a\" (UID: \"fd03522b-4930-4c43-ae91-76bd6891424a\") "
Oct 14 15:25:01 crc kubenswrapper[4860]: I1014 15:25:01.939275 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd03522b-4930-4c43-ae91-76bd6891424a-ssh-key\") pod \"fd03522b-4930-4c43-ae91-76bd6891424a\" (UID: \"fd03522b-4930-4c43-ae91-76bd6891424a\") "
Oct 14 15:25:01 crc kubenswrapper[4860]: I1014 15:25:01.946266 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd03522b-4930-4c43-ae91-76bd6891424a-kube-api-access-c9gsh" (OuterVolumeSpecName: "kube-api-access-c9gsh") pod "fd03522b-4930-4c43-ae91-76bd6891424a" (UID: "fd03522b-4930-4c43-ae91-76bd6891424a"). InnerVolumeSpecName "kube-api-access-c9gsh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:25:01 crc kubenswrapper[4860]: I1014 15:25:01.971507 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd03522b-4930-4c43-ae91-76bd6891424a-inventory" (OuterVolumeSpecName: "inventory") pod "fd03522b-4930-4c43-ae91-76bd6891424a" (UID: "fd03522b-4930-4c43-ae91-76bd6891424a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:25:01 crc kubenswrapper[4860]: I1014 15:25:01.975200 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd03522b-4930-4c43-ae91-76bd6891424a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fd03522b-4930-4c43-ae91-76bd6891424a" (UID: "fd03522b-4930-4c43-ae91-76bd6891424a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:25:02 crc kubenswrapper[4860]: I1014 15:25:02.041738 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd03522b-4930-4c43-ae91-76bd6891424a-inventory\") on node \"crc\" DevicePath \"\""
Oct 14 15:25:02 crc kubenswrapper[4860]: I1014 15:25:02.041779 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9gsh\" (UniqueName: \"kubernetes.io/projected/fd03522b-4930-4c43-ae91-76bd6891424a-kube-api-access-c9gsh\") on node \"crc\" DevicePath \"\""
Oct 14 15:25:02 crc kubenswrapper[4860]: I1014 15:25:02.041791 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd03522b-4930-4c43-ae91-76bd6891424a-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 14 15:25:02 crc kubenswrapper[4860]: I1014 15:25:02.422746 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6" event={"ID":"fd03522b-4930-4c43-ae91-76bd6891424a","Type":"ContainerDied","Data":"b59a9a4eef9051aba08e9db77758fdc25f7661b1733e1284d683cfedfbaba885"}
Oct 14 15:25:02 crc kubenswrapper[4860]: I1014 15:25:02.423134 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b59a9a4eef9051aba08e9db77758fdc25f7661b1733e1284d683cfedfbaba885"
Oct 14 15:25:02 crc kubenswrapper[4860]: I1014 15:25:02.422796 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6"
Oct 14 15:25:08 crc kubenswrapper[4860]: I1014 15:25:08.238643 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sb7kt"
Oct 14 15:25:08 crc kubenswrapper[4860]: I1014 15:25:08.409346 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sb7kt"
Oct 14 15:25:08 crc kubenswrapper[4860]: I1014 15:25:08.501772 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sb7kt"]
Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.029668 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk"]
Oct 14 15:25:09 crc kubenswrapper[4860]: E1014 15:25:09.030857 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd03522b-4930-4c43-ae91-76bd6891424a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.030955 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd03522b-4930-4c43-ae91-76bd6891424a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.031284 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd03522b-4930-4c43-ae91-76bd6891424a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.032228 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk"
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.035893 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.036297 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.036429 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.038884 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.079054 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk"] Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.097172 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lxjv\" (UniqueName: \"kubernetes.io/projected/8815aac7-80df-436c-ad49-c49907b6ed3c-kube-api-access-8lxjv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk\" (UID: \"8815aac7-80df-436c-ad49-c49907b6ed3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.097493 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8815aac7-80df-436c-ad49-c49907b6ed3c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk\" (UID: \"8815aac7-80df-436c-ad49-c49907b6ed3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.097648 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8815aac7-80df-436c-ad49-c49907b6ed3c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk\" (UID: \"8815aac7-80df-436c-ad49-c49907b6ed3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.199646 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8815aac7-80df-436c-ad49-c49907b6ed3c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk\" (UID: \"8815aac7-80df-436c-ad49-c49907b6ed3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.199710 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8815aac7-80df-436c-ad49-c49907b6ed3c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk\" (UID: \"8815aac7-80df-436c-ad49-c49907b6ed3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.200365 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lxjv\" (UniqueName: \"kubernetes.io/projected/8815aac7-80df-436c-ad49-c49907b6ed3c-kube-api-access-8lxjv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk\" 
(UID: \"8815aac7-80df-436c-ad49-c49907b6ed3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.218156 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8815aac7-80df-436c-ad49-c49907b6ed3c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk\" (UID: \"8815aac7-80df-436c-ad49-c49907b6ed3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.219098 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8815aac7-80df-436c-ad49-c49907b6ed3c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk\" (UID: \"8815aac7-80df-436c-ad49-c49907b6ed3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.221401 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lxjv\" (UniqueName: \"kubernetes.io/projected/8815aac7-80df-436c-ad49-c49907b6ed3c-kube-api-access-8lxjv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk\" (UID: \"8815aac7-80df-436c-ad49-c49907b6ed3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.351483 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.479991 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sb7kt" podUID="17a750b8-880b-4623-8629-ca48c942292c" containerName="registry-server" containerID="cri-o://51178a860b1d5d2552b74e70669595304c13e57bcafebb23f66b369fef3dff7e" gracePeriod=2 Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.878982 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sb7kt" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.913960 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gt99\" (UniqueName: \"kubernetes.io/projected/17a750b8-880b-4623-8629-ca48c942292c-kube-api-access-9gt99\") pod \"17a750b8-880b-4623-8629-ca48c942292c\" (UID: \"17a750b8-880b-4623-8629-ca48c942292c\") " Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.914205 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17a750b8-880b-4623-8629-ca48c942292c-catalog-content\") pod \"17a750b8-880b-4623-8629-ca48c942292c\" (UID: \"17a750b8-880b-4623-8629-ca48c942292c\") " Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.914248 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17a750b8-880b-4623-8629-ca48c942292c-utilities\") pod \"17a750b8-880b-4623-8629-ca48c942292c\" (UID: \"17a750b8-880b-4623-8629-ca48c942292c\") " Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.916619 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17a750b8-880b-4623-8629-ca48c942292c-utilities" (OuterVolumeSpecName: "utilities") pod "17a750b8-880b-4623-8629-ca48c942292c" (UID: "17a750b8-880b-4623-8629-ca48c942292c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.920972 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17a750b8-880b-4623-8629-ca48c942292c-kube-api-access-9gt99" (OuterVolumeSpecName: "kube-api-access-9gt99") pod "17a750b8-880b-4623-8629-ca48c942292c" (UID: "17a750b8-880b-4623-8629-ca48c942292c"). InnerVolumeSpecName "kube-api-access-9gt99". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:25:09 crc kubenswrapper[4860]: I1014 15:25:09.939307 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk"] Oct 14 15:25:09 crc kubenswrapper[4860]: W1014 15:25:09.940329 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8815aac7_80df_436c_ad49_c49907b6ed3c.slice/crio-8283609e8970c9cd88527d94c5d9af0d60339609abdb77adb0339f05812e9a6b WatchSource:0}: Error finding container 8283609e8970c9cd88527d94c5d9af0d60339609abdb77adb0339f05812e9a6b: Status 404 returned error can't find the container with id 8283609e8970c9cd88527d94c5d9af0d60339609abdb77adb0339f05812e9a6b Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.015341 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17a750b8-880b-4623-8629-ca48c942292c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17a750b8-880b-4623-8629-ca48c942292c" (UID: "17a750b8-880b-4623-8629-ca48c942292c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.015741 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gt99\" (UniqueName: \"kubernetes.io/projected/17a750b8-880b-4623-8629-ca48c942292c-kube-api-access-9gt99\") on node \"crc\" DevicePath \"\"" Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.015843 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17a750b8-880b-4623-8629-ca48c942292c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.015931 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17a750b8-880b-4623-8629-ca48c942292c-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.491632 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk" event={"ID":"8815aac7-80df-436c-ad49-c49907b6ed3c","Type":"ContainerStarted","Data":"8283609e8970c9cd88527d94c5d9af0d60339609abdb77adb0339f05812e9a6b"} Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.494851 4860 generic.go:334] "Generic (PLEG): container finished" podID="17a750b8-880b-4623-8629-ca48c942292c" containerID="51178a860b1d5d2552b74e70669595304c13e57bcafebb23f66b369fef3dff7e" exitCode=0 Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.494892 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb7kt" event={"ID":"17a750b8-880b-4623-8629-ca48c942292c","Type":"ContainerDied","Data":"51178a860b1d5d2552b74e70669595304c13e57bcafebb23f66b369fef3dff7e"} Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.494904 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sb7kt" Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.494920 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb7kt" event={"ID":"17a750b8-880b-4623-8629-ca48c942292c","Type":"ContainerDied","Data":"ce3fe29758a5faca48dc98fca05032146dcb73dcad6d1266903e522c3c55e814"} Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.494941 4860 scope.go:117] "RemoveContainer" containerID="51178a860b1d5d2552b74e70669595304c13e57bcafebb23f66b369fef3dff7e" Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.517982 4860 scope.go:117] "RemoveContainer" containerID="aa260d0b8a62a7ca308f5c3dfa2512f3e2c617319f44940cf4471551f2d970b6" Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.532093 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sb7kt"] Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.540454 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sb7kt"] Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.550960 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bx9t7" Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.563495 4860 scope.go:117] "RemoveContainer" containerID="6410af40dc082c99fd0d6bdb79205b74b4fe3c22787d6790c4ee2fae0cc98e01" Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.607058 4860 scope.go:117] "RemoveContainer" containerID="51178a860b1d5d2552b74e70669595304c13e57bcafebb23f66b369fef3dff7e" Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.607740 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bx9t7" Oct 14 15:25:10 crc kubenswrapper[4860]: E1014 15:25:10.607806 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51178a860b1d5d2552b74e70669595304c13e57bcafebb23f66b369fef3dff7e\": container with ID starting with 51178a860b1d5d2552b74e70669595304c13e57bcafebb23f66b369fef3dff7e not found: ID does not exist" containerID="51178a860b1d5d2552b74e70669595304c13e57bcafebb23f66b369fef3dff7e" Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.607840 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51178a860b1d5d2552b74e70669595304c13e57bcafebb23f66b369fef3dff7e"} err="failed to get container status \"51178a860b1d5d2552b74e70669595304c13e57bcafebb23f66b369fef3dff7e\": rpc error: code = NotFound desc = could not find container \"51178a860b1d5d2552b74e70669595304c13e57bcafebb23f66b369fef3dff7e\": container with ID starting with 51178a860b1d5d2552b74e70669595304c13e57bcafebb23f66b369fef3dff7e not found: ID does not exist" Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.608141 4860 scope.go:117] "RemoveContainer" containerID="aa260d0b8a62a7ca308f5c3dfa2512f3e2c617319f44940cf4471551f2d970b6" Oct 14 15:25:10 crc kubenswrapper[4860]: E1014 15:25:10.608544 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa260d0b8a62a7ca308f5c3dfa2512f3e2c617319f44940cf4471551f2d970b6\": container with ID starting with aa260d0b8a62a7ca308f5c3dfa2512f3e2c617319f44940cf4471551f2d970b6 not found: ID does not exist" containerID="aa260d0b8a62a7ca308f5c3dfa2512f3e2c617319f44940cf4471551f2d970b6" Oct 14 15:25:10 crc 
kubenswrapper[4860]: I1014 15:25:10.608576 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa260d0b8a62a7ca308f5c3dfa2512f3e2c617319f44940cf4471551f2d970b6"} err="failed to get container status \"aa260d0b8a62a7ca308f5c3dfa2512f3e2c617319f44940cf4471551f2d970b6\": rpc error: code = NotFound desc = could not find container \"aa260d0b8a62a7ca308f5c3dfa2512f3e2c617319f44940cf4471551f2d970b6\": container with ID starting with aa260d0b8a62a7ca308f5c3dfa2512f3e2c617319f44940cf4471551f2d970b6 not found: ID does not exist" Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.608593 4860 scope.go:117] "RemoveContainer" containerID="6410af40dc082c99fd0d6bdb79205b74b4fe3c22787d6790c4ee2fae0cc98e01" Oct 14 15:25:10 crc kubenswrapper[4860]: E1014 15:25:10.608984 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6410af40dc082c99fd0d6bdb79205b74b4fe3c22787d6790c4ee2fae0cc98e01\": container with ID starting with 6410af40dc082c99fd0d6bdb79205b74b4fe3c22787d6790c4ee2fae0cc98e01 not found: ID does not exist" containerID="6410af40dc082c99fd0d6bdb79205b74b4fe3c22787d6790c4ee2fae0cc98e01" Oct 14 15:25:10 crc kubenswrapper[4860]: I1014 15:25:10.609017 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6410af40dc082c99fd0d6bdb79205b74b4fe3c22787d6790c4ee2fae0cc98e01"} err="failed to get container status \"6410af40dc082c99fd0d6bdb79205b74b4fe3c22787d6790c4ee2fae0cc98e01\": rpc error: code = NotFound desc = could not find container \"6410af40dc082c99fd0d6bdb79205b74b4fe3c22787d6790c4ee2fae0cc98e01\": container with ID starting with 6410af40dc082c99fd0d6bdb79205b74b4fe3c22787d6790c4ee2fae0cc98e01 not found: ID does not exist" Oct 14 15:25:11 crc kubenswrapper[4860]: I1014 15:25:11.070427 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17a750b8-880b-4623-8629-ca48c942292c" path="/var/lib/kubelet/pods/17a750b8-880b-4623-8629-ca48c942292c/volumes" Oct 14 15:25:11 crc kubenswrapper[4860]: I1014 15:25:11.506608 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk" event={"ID":"8815aac7-80df-436c-ad49-c49907b6ed3c","Type":"ContainerStarted","Data":"9a571ff5133a95a9d285582583a3bc10c846e51681170294d9cf4a02e0a808f1"} Oct 14 15:25:11 crc kubenswrapper[4860]: I1014 15:25:11.523173 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk" podStartSLOduration=2.189117192 podStartE2EDuration="2.523153798s" podCreationTimestamp="2025-10-14 15:25:09 +0000 UTC" firstStartedPulling="2025-10-14 15:25:09.942593357 +0000 UTC m=+2171.529376806" lastFinishedPulling="2025-10-14 15:25:10.276629963 +0000 UTC m=+2171.863413412" observedRunningTime="2025-10-14 15:25:11.519687714 +0000 UTC m=+2173.106471183" watchObservedRunningTime="2025-10-14 15:25:11.523153798 +0000 UTC m=+2173.109937247" Oct 14 15:25:12 crc kubenswrapper[4860]: I1014 15:25:12.906630 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bx9t7"] Oct 14 15:25:12 crc kubenswrapper[4860]: I1014 15:25:12.907214 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bx9t7" podUID="317fb3d6-ef8d-4085-8e9d-444d045b211b" containerName="registry-server" 
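The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triples above are a benign race: by the time the kubelet re-checks, CRI-O has already removed the container and the runtime answers with gRPC NotFound. Treating NotFound as success makes the delete idempotent; a sketch (deleteContainer is a hypothetical stand-in for the CRI call):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIfPresent swallows gRPC NotFound, mirroring why the
// "DeleteContainer returned error ... NotFound" entries are harmless:
// the container the kubelet wanted gone is already gone.
func removeIfPresent(id string, deleteContainer func(string) error) error {
	err := deleteContainer(id)
	if err == nil || status.Code(err) == codes.NotFound {
		return nil
	}
	return fmt.Errorf("delete container %s: %w", id, err)
}

func main() {
	alreadyGone := func(string) error {
		return status.Error(codes.NotFound, "could not find container")
	}
	fmt.Println(removeIfPresent("51178a86", alreadyGone)) // <nil>
}
```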
containerID="cri-o://3de62dc3ccedd18173cf5b2355cd3c8944e5a814d800e2b4153e11ceea33359b" gracePeriod=2 Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.384101 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bx9t7" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.476885 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317fb3d6-ef8d-4085-8e9d-444d045b211b-utilities\") pod \"317fb3d6-ef8d-4085-8e9d-444d045b211b\" (UID: \"317fb3d6-ef8d-4085-8e9d-444d045b211b\") " Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.477008 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317fb3d6-ef8d-4085-8e9d-444d045b211b-catalog-content\") pod \"317fb3d6-ef8d-4085-8e9d-444d045b211b\" (UID: \"317fb3d6-ef8d-4085-8e9d-444d045b211b\") " Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.477114 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l54wp\" (UniqueName: \"kubernetes.io/projected/317fb3d6-ef8d-4085-8e9d-444d045b211b-kube-api-access-l54wp\") pod \"317fb3d6-ef8d-4085-8e9d-444d045b211b\" (UID: \"317fb3d6-ef8d-4085-8e9d-444d045b211b\") " Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.478413 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/317fb3d6-ef8d-4085-8e9d-444d045b211b-utilities" (OuterVolumeSpecName: "utilities") pod "317fb3d6-ef8d-4085-8e9d-444d045b211b" (UID: "317fb3d6-ef8d-4085-8e9d-444d045b211b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.486382 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/317fb3d6-ef8d-4085-8e9d-444d045b211b-kube-api-access-l54wp" (OuterVolumeSpecName: "kube-api-access-l54wp") pod "317fb3d6-ef8d-4085-8e9d-444d045b211b" (UID: "317fb3d6-ef8d-4085-8e9d-444d045b211b"). InnerVolumeSpecName "kube-api-access-l54wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.524758 4860 generic.go:334] "Generic (PLEG): container finished" podID="317fb3d6-ef8d-4085-8e9d-444d045b211b" containerID="3de62dc3ccedd18173cf5b2355cd3c8944e5a814d800e2b4153e11ceea33359b" exitCode=0 Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.524803 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bx9t7" event={"ID":"317fb3d6-ef8d-4085-8e9d-444d045b211b","Type":"ContainerDied","Data":"3de62dc3ccedd18173cf5b2355cd3c8944e5a814d800e2b4153e11ceea33359b"} Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.524828 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bx9t7" event={"ID":"317fb3d6-ef8d-4085-8e9d-444d045b211b","Type":"ContainerDied","Data":"2712a2a628e70d566ce7200b1754bc1b0604e4d13aa0d1ddcffc5cb11104e001"} Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.524843 4860 scope.go:117] "RemoveContainer" containerID="3de62dc3ccedd18173cf5b2355cd3c8944e5a814d800e2b4153e11ceea33359b" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.524977 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bx9t7" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.527863 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/317fb3d6-ef8d-4085-8e9d-444d045b211b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "317fb3d6-ef8d-4085-8e9d-444d045b211b" (UID: "317fb3d6-ef8d-4085-8e9d-444d045b211b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.548428 4860 scope.go:117] "RemoveContainer" containerID="4f1eb6ab885ae03b552364efae4debcd298e2990234b94715406077a64c9e3a1" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.575418 4860 scope.go:117] "RemoveContainer" containerID="6a66dffd05beb6dceb3e0483367a452fd9ba93ee48462e6e2daafb054b5a465e" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.579309 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317fb3d6-ef8d-4085-8e9d-444d045b211b-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.579350 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317fb3d6-ef8d-4085-8e9d-444d045b211b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.579364 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l54wp\" (UniqueName: \"kubernetes.io/projected/317fb3d6-ef8d-4085-8e9d-444d045b211b-kube-api-access-l54wp\") on node \"crc\" DevicePath \"\"" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.619310 4860 scope.go:117] "RemoveContainer" containerID="3de62dc3ccedd18173cf5b2355cd3c8944e5a814d800e2b4153e11ceea33359b" Oct 14 15:25:13 crc kubenswrapper[4860]: E1014 15:25:13.619736 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3de62dc3ccedd18173cf5b2355cd3c8944e5a814d800e2b4153e11ceea33359b\": container with ID starting with 3de62dc3ccedd18173cf5b2355cd3c8944e5a814d800e2b4153e11ceea33359b not found: ID does not exist" containerID="3de62dc3ccedd18173cf5b2355cd3c8944e5a814d800e2b4153e11ceea33359b" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.619789 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3de62dc3ccedd18173cf5b2355cd3c8944e5a814d800e2b4153e11ceea33359b"} err="failed to get container status \"3de62dc3ccedd18173cf5b2355cd3c8944e5a814d800e2b4153e11ceea33359b\": rpc error: code = NotFound desc = could not find container \"3de62dc3ccedd18173cf5b2355cd3c8944e5a814d800e2b4153e11ceea33359b\": container with ID starting with 3de62dc3ccedd18173cf5b2355cd3c8944e5a814d800e2b4153e11ceea33359b not found: ID does not exist" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.619832 4860 scope.go:117] "RemoveContainer" containerID="4f1eb6ab885ae03b552364efae4debcd298e2990234b94715406077a64c9e3a1" Oct 14 15:25:13 crc kubenswrapper[4860]: E1014 15:25:13.620121 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f1eb6ab885ae03b552364efae4debcd298e2990234b94715406077a64c9e3a1\": container with ID starting with 4f1eb6ab885ae03b552364efae4debcd298e2990234b94715406077a64c9e3a1 not found: ID does not exist" 
containerID="4f1eb6ab885ae03b552364efae4debcd298e2990234b94715406077a64c9e3a1" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.620147 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f1eb6ab885ae03b552364efae4debcd298e2990234b94715406077a64c9e3a1"} err="failed to get container status \"4f1eb6ab885ae03b552364efae4debcd298e2990234b94715406077a64c9e3a1\": rpc error: code = NotFound desc = could not find container \"4f1eb6ab885ae03b552364efae4debcd298e2990234b94715406077a64c9e3a1\": container with ID starting with 4f1eb6ab885ae03b552364efae4debcd298e2990234b94715406077a64c9e3a1 not found: ID does not exist" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.620163 4860 scope.go:117] "RemoveContainer" containerID="6a66dffd05beb6dceb3e0483367a452fd9ba93ee48462e6e2daafb054b5a465e" Oct 14 15:25:13 crc kubenswrapper[4860]: E1014 15:25:13.620390 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a66dffd05beb6dceb3e0483367a452fd9ba93ee48462e6e2daafb054b5a465e\": container with ID starting with 6a66dffd05beb6dceb3e0483367a452fd9ba93ee48462e6e2daafb054b5a465e not found: ID does not exist" containerID="6a66dffd05beb6dceb3e0483367a452fd9ba93ee48462e6e2daafb054b5a465e" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.620411 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a66dffd05beb6dceb3e0483367a452fd9ba93ee48462e6e2daafb054b5a465e"} err="failed to get container status \"6a66dffd05beb6dceb3e0483367a452fd9ba93ee48462e6e2daafb054b5a465e\": rpc error: code = NotFound desc = could not find container \"6a66dffd05beb6dceb3e0483367a452fd9ba93ee48462e6e2daafb054b5a465e\": container with ID starting with 6a66dffd05beb6dceb3e0483367a452fd9ba93ee48462e6e2daafb054b5a465e not found: ID does not exist" Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.869159 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bx9t7"] Oct 14 15:25:13 crc kubenswrapper[4860]: I1014 15:25:13.877217 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bx9t7"] Oct 14 15:25:15 crc kubenswrapper[4860]: I1014 15:25:15.072524 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="317fb3d6-ef8d-4085-8e9d-444d045b211b" path="/var/lib/kubelet/pods/317fb3d6-ef8d-4085-8e9d-444d045b211b/volumes" Oct 14 15:25:29 crc kubenswrapper[4860]: I1014 15:25:29.245657 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:25:29 crc kubenswrapper[4860]: I1014 15:25:29.246284 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:25:29 crc kubenswrapper[4860]: I1014 15:25:29.246342 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 15:25:29 crc kubenswrapper[4860]: I1014 15:25:29.247166 4860 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 15:25:29 crc kubenswrapper[4860]: I1014 15:25:29.247212 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" gracePeriod=600 Oct 14 15:25:29 crc kubenswrapper[4860]: E1014 15:25:29.371481 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:25:29 crc kubenswrapper[4860]: I1014 15:25:29.664947 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" exitCode=0 Oct 14 15:25:29 crc kubenswrapper[4860]: I1014 15:25:29.665001 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484"} Oct 14 15:25:29 crc kubenswrapper[4860]: I1014 15:25:29.665042 4860 scope.go:117] "RemoveContainer" containerID="88d96a64a7082d356ceb1b7aa3d1e1d3f5289d2d18f169a9ea68443f4df8c882" Oct 14 15:25:29 crc kubenswrapper[4860]: I1014 15:25:29.665952 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:25:29 crc kubenswrapper[4860]: E1014 15:25:29.666224 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:25:41 crc kubenswrapper[4860]: I1014 15:25:41.061768 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:25:41 crc kubenswrapper[4860]: E1014 15:25:41.062655 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:25:56 crc kubenswrapper[4860]: I1014 15:25:56.061496 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 
Oct 14 15:25:56 crc kubenswrapper[4860]: E1014 15:25:56.062232 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051"
Oct 14 15:25:59 crc kubenswrapper[4860]: I1014 15:25:59.900589 4860 generic.go:334] "Generic (PLEG): container finished" podID="8815aac7-80df-436c-ad49-c49907b6ed3c" containerID="9a571ff5133a95a9d285582583a3bc10c846e51681170294d9cf4a02e0a808f1" exitCode=0
Oct 14 15:25:59 crc kubenswrapper[4860]: I1014 15:25:59.900690 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk" event={"ID":"8815aac7-80df-436c-ad49-c49907b6ed3c","Type":"ContainerDied","Data":"9a571ff5133a95a9d285582583a3bc10c846e51681170294d9cf4a02e0a808f1"}
Oct 14 15:26:01 crc kubenswrapper[4860]: I1014 15:26:01.311519 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk"
Oct 14 15:26:01 crc kubenswrapper[4860]: I1014 15:26:01.484377 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8815aac7-80df-436c-ad49-c49907b6ed3c-ssh-key\") pod \"8815aac7-80df-436c-ad49-c49907b6ed3c\" (UID: \"8815aac7-80df-436c-ad49-c49907b6ed3c\") "
Oct 14 15:26:01 crc kubenswrapper[4860]: I1014 15:26:01.484629 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8815aac7-80df-436c-ad49-c49907b6ed3c-inventory\") pod \"8815aac7-80df-436c-ad49-c49907b6ed3c\" (UID: \"8815aac7-80df-436c-ad49-c49907b6ed3c\") "
Oct 14 15:26:01 crc kubenswrapper[4860]: I1014 15:26:01.484691 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lxjv\" (UniqueName: \"kubernetes.io/projected/8815aac7-80df-436c-ad49-c49907b6ed3c-kube-api-access-8lxjv\") pod \"8815aac7-80df-436c-ad49-c49907b6ed3c\" (UID: \"8815aac7-80df-436c-ad49-c49907b6ed3c\") "
Oct 14 15:26:01 crc kubenswrapper[4860]: I1014 15:26:01.490153 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8815aac7-80df-436c-ad49-c49907b6ed3c-kube-api-access-8lxjv" (OuterVolumeSpecName: "kube-api-access-8lxjv") pod "8815aac7-80df-436c-ad49-c49907b6ed3c" (UID: "8815aac7-80df-436c-ad49-c49907b6ed3c"). InnerVolumeSpecName "kube-api-access-8lxjv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 15:26:01 crc kubenswrapper[4860]: I1014 15:26:01.512616 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8815aac7-80df-436c-ad49-c49907b6ed3c-inventory" (OuterVolumeSpecName: "inventory") pod "8815aac7-80df-436c-ad49-c49907b6ed3c" (UID: "8815aac7-80df-436c-ad49-c49907b6ed3c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:26:01 crc kubenswrapper[4860]: I1014 15:26:01.514342 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8815aac7-80df-436c-ad49-c49907b6ed3c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8815aac7-80df-436c-ad49-c49907b6ed3c" (UID: "8815aac7-80df-436c-ad49-c49907b6ed3c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:26:01 crc kubenswrapper[4860]: I1014 15:26:01.586443 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8815aac7-80df-436c-ad49-c49907b6ed3c-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 14 15:26:01 crc kubenswrapper[4860]: I1014 15:26:01.586476 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8815aac7-80df-436c-ad49-c49907b6ed3c-inventory\") on node \"crc\" DevicePath \"\""
Oct 14 15:26:01 crc kubenswrapper[4860]: I1014 15:26:01.586488 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lxjv\" (UniqueName: \"kubernetes.io/projected/8815aac7-80df-436c-ad49-c49907b6ed3c-kube-api-access-8lxjv\") on node \"crc\" DevicePath \"\""
Oct 14 15:26:01 crc kubenswrapper[4860]: I1014 15:26:01.920401 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk" event={"ID":"8815aac7-80df-436c-ad49-c49907b6ed3c","Type":"ContainerDied","Data":"8283609e8970c9cd88527d94c5d9af0d60339609abdb77adb0339f05812e9a6b"}
Oct 14 15:26:01 crc kubenswrapper[4860]: I1014 15:26:01.920449 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8283609e8970c9cd88527d94c5d9af0d60339609abdb77adb0339f05812e9a6b"
Oct 14 15:26:01 crc kubenswrapper[4860]: I1014 15:26:01.920482 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk"
Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.077361 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-47tvx"]
Oct 14 15:26:02 crc kubenswrapper[4860]: E1014 15:26:02.078398 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317fb3d6-ef8d-4085-8e9d-444d045b211b" containerName="extract-content"
Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.078423 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="317fb3d6-ef8d-4085-8e9d-444d045b211b" containerName="extract-content"
Oct 14 15:26:02 crc kubenswrapper[4860]: E1014 15:26:02.078443 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8815aac7-80df-436c-ad49-c49907b6ed3c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.078455 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8815aac7-80df-436c-ad49-c49907b6ed3c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 14 15:26:02 crc kubenswrapper[4860]: E1014 15:26:02.078475 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a750b8-880b-4623-8629-ca48c942292c" containerName="extract-content"
Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.078483 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a750b8-880b-4623-8629-ca48c942292c" containerName="extract-content"
Oct 14 15:26:02 crc kubenswrapper[4860]: E1014 15:26:02.078509 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a750b8-880b-4623-8629-ca48c942292c" containerName="extract-utilities"
Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.078524 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a750b8-880b-4623-8629-ca48c942292c" containerName="extract-utilities"
Oct 14 15:26:02 crc kubenswrapper[4860]: E1014 15:26:02.078543 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a750b8-880b-4623-8629-ca48c942292c" containerName="registry-server"
Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.078552 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a750b8-880b-4623-8629-ca48c942292c" containerName="registry-server"
Oct 14 15:26:02 crc kubenswrapper[4860]: E1014 15:26:02.078582 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317fb3d6-ef8d-4085-8e9d-444d045b211b" containerName="registry-server"
Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.078590 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="317fb3d6-ef8d-4085-8e9d-444d045b211b" containerName="registry-server"
Oct 14 15:26:02 crc kubenswrapper[4860]: E1014 15:26:02.078625 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317fb3d6-ef8d-4085-8e9d-444d045b211b" containerName="extract-utilities"
Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.078634 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="317fb3d6-ef8d-4085-8e9d-444d045b211b" containerName="extract-utilities"
Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.083629 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="8815aac7-80df-436c-ad49-c49907b6ed3c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.083734 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a750b8-880b-4623-8629-ca48c942292c" containerName="registry-server"
containerName="registry-server" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.083765 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="317fb3d6-ef8d-4085-8e9d-444d045b211b" containerName="registry-server" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.085042 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.094949 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.095270 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.095711 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.095982 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.096895 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-47tvx\" (UID: \"b3b6bfde-9f16-4803-8b4c-2aba73c9612f\") " pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.097186 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-47tvx\" (UID: \"b3b6bfde-9f16-4803-8b4c-2aba73c9612f\") " pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.097354 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx5pl\" (UniqueName: \"kubernetes.io/projected/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-kube-api-access-cx5pl\") pod \"ssh-known-hosts-edpm-deployment-47tvx\" (UID: \"b3b6bfde-9f16-4803-8b4c-2aba73c9612f\") " pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.111654 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-47tvx"] Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.199363 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx5pl\" (UniqueName: \"kubernetes.io/projected/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-kube-api-access-cx5pl\") pod \"ssh-known-hosts-edpm-deployment-47tvx\" (UID: \"b3b6bfde-9f16-4803-8b4c-2aba73c9612f\") " pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.199481 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-47tvx\" (UID: \"b3b6bfde-9f16-4803-8b4c-2aba73c9612f\") " pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.199625 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-47tvx\" (UID: \"b3b6bfde-9f16-4803-8b4c-2aba73c9612f\") " pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.203629 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-47tvx\" (UID: \"b3b6bfde-9f16-4803-8b4c-2aba73c9612f\") " pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.208337 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-47tvx\" (UID: \"b3b6bfde-9f16-4803-8b4c-2aba73c9612f\") " pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.219425 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx5pl\" (UniqueName: \"kubernetes.io/projected/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-kube-api-access-cx5pl\") pod \"ssh-known-hosts-edpm-deployment-47tvx\" (UID: \"b3b6bfde-9f16-4803-8b4c-2aba73c9612f\") " pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.405825 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.919713 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-47tvx"] Oct 14 15:26:02 crc kubenswrapper[4860]: W1014 15:26:02.921649 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3b6bfde_9f16_4803_8b4c_2aba73c9612f.slice/crio-ab78eaf5eb9dc7e6622247a6f33a4815a6f01ca4e08feb3fa7df666e906cad96 WatchSource:0}: Error finding container ab78eaf5eb9dc7e6622247a6f33a4815a6f01ca4e08feb3fa7df666e906cad96: Status 404 returned error can't find the container with id ab78eaf5eb9dc7e6622247a6f33a4815a6f01ca4e08feb3fa7df666e906cad96 Oct 14 15:26:02 crc kubenswrapper[4860]: I1014 15:26:02.930910 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" event={"ID":"b3b6bfde-9f16-4803-8b4c-2aba73c9612f","Type":"ContainerStarted","Data":"ab78eaf5eb9dc7e6622247a6f33a4815a6f01ca4e08feb3fa7df666e906cad96"} Oct 14 15:26:03 crc kubenswrapper[4860]: I1014 15:26:03.944356 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" event={"ID":"b3b6bfde-9f16-4803-8b4c-2aba73c9612f","Type":"ContainerStarted","Data":"5fe82ae22756e2df8aef14479c5c22c07f911b1b607b35133c8a9c9f5ed2a6e9"} Oct 14 15:26:03 crc kubenswrapper[4860]: I1014 15:26:03.967323 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" podStartSLOduration=1.829324287 podStartE2EDuration="1.967302285s" podCreationTimestamp="2025-10-14 15:26:02 +0000 UTC" firstStartedPulling="2025-10-14 15:26:02.925230773 +0000 UTC m=+2224.512014222" lastFinishedPulling="2025-10-14 
15:26:03.063208771 +0000 UTC m=+2224.649992220" observedRunningTime="2025-10-14 15:26:03.963736388 +0000 UTC m=+2225.550519837" watchObservedRunningTime="2025-10-14 15:26:03.967302285 +0000 UTC m=+2225.554085734" Oct 14 15:26:07 crc kubenswrapper[4860]: I1014 15:26:07.062521 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:26:07 crc kubenswrapper[4860]: E1014 15:26:07.063258 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:26:11 crc kubenswrapper[4860]: I1014 15:26:11.005714 4860 generic.go:334] "Generic (PLEG): container finished" podID="b3b6bfde-9f16-4803-8b4c-2aba73c9612f" containerID="5fe82ae22756e2df8aef14479c5c22c07f911b1b607b35133c8a9c9f5ed2a6e9" exitCode=0 Oct 14 15:26:11 crc kubenswrapper[4860]: I1014 15:26:11.005805 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" event={"ID":"b3b6bfde-9f16-4803-8b4c-2aba73c9612f","Type":"ContainerDied","Data":"5fe82ae22756e2df8aef14479c5c22c07f911b1b607b35133c8a9c9f5ed2a6e9"} Oct 14 15:26:12 crc kubenswrapper[4860]: I1014 15:26:12.423476 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" Oct 14 15:26:12 crc kubenswrapper[4860]: I1014 15:26:12.605983 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-ssh-key-openstack-edpm-ipam\") pod \"b3b6bfde-9f16-4803-8b4c-2aba73c9612f\" (UID: \"b3b6bfde-9f16-4803-8b4c-2aba73c9612f\") " Oct 14 15:26:12 crc kubenswrapper[4860]: I1014 15:26:12.606247 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-inventory-0\") pod \"b3b6bfde-9f16-4803-8b4c-2aba73c9612f\" (UID: \"b3b6bfde-9f16-4803-8b4c-2aba73c9612f\") " Oct 14 15:26:12 crc kubenswrapper[4860]: I1014 15:26:12.606315 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx5pl\" (UniqueName: \"kubernetes.io/projected/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-kube-api-access-cx5pl\") pod \"b3b6bfde-9f16-4803-8b4c-2aba73c9612f\" (UID: \"b3b6bfde-9f16-4803-8b4c-2aba73c9612f\") " Oct 14 15:26:12 crc kubenswrapper[4860]: I1014 15:26:12.617460 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-kube-api-access-cx5pl" (OuterVolumeSpecName: "kube-api-access-cx5pl") pod "b3b6bfde-9f16-4803-8b4c-2aba73c9612f" (UID: "b3b6bfde-9f16-4803-8b4c-2aba73c9612f"). InnerVolumeSpecName "kube-api-access-cx5pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:26:12 crc kubenswrapper[4860]: I1014 15:26:12.634874 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "b3b6bfde-9f16-4803-8b4c-2aba73c9612f" (UID: "b3b6bfde-9f16-4803-8b4c-2aba73c9612f"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:26:12 crc kubenswrapper[4860]: I1014 15:26:12.637199 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b3b6bfde-9f16-4803-8b4c-2aba73c9612f" (UID: "b3b6bfde-9f16-4803-8b4c-2aba73c9612f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:26:12 crc kubenswrapper[4860]: I1014 15:26:12.708636 4860 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:26:12 crc kubenswrapper[4860]: I1014 15:26:12.708673 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx5pl\" (UniqueName: \"kubernetes.io/projected/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-kube-api-access-cx5pl\") on node \"crc\" DevicePath \"\"" Oct 14 15:26:12 crc kubenswrapper[4860]: I1014 15:26:12.708684 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3b6bfde-9f16-4803-8b4c-2aba73c9612f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.024877 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" event={"ID":"b3b6bfde-9f16-4803-8b4c-2aba73c9612f","Type":"ContainerDied","Data":"ab78eaf5eb9dc7e6622247a6f33a4815a6f01ca4e08feb3fa7df666e906cad96"} Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.024927 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab78eaf5eb9dc7e6622247a6f33a4815a6f01ca4e08feb3fa7df666e906cad96" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.024929 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-47tvx" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.102000 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4"] Oct 14 15:26:13 crc kubenswrapper[4860]: E1014 15:26:13.102385 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b6bfde-9f16-4803-8b4c-2aba73c9612f" containerName="ssh-known-hosts-edpm-deployment" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.102399 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b6bfde-9f16-4803-8b4c-2aba73c9612f" containerName="ssh-known-hosts-edpm-deployment" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.102600 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b6bfde-9f16-4803-8b4c-2aba73c9612f" containerName="ssh-known-hosts-edpm-deployment" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.103262 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.106526 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.106618 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.118148 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4"] Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.118762 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.118809 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.216214 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9af1a0e5-8c28-4be6-8906-f60775a83304-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4jpw4\" (UID: \"9af1a0e5-8c28-4be6-8906-f60775a83304\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.216624 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9af1a0e5-8c28-4be6-8906-f60775a83304-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4jpw4\" (UID: \"9af1a0e5-8c28-4be6-8906-f60775a83304\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.216784 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-584kq\" (UniqueName: \"kubernetes.io/projected/9af1a0e5-8c28-4be6-8906-f60775a83304-kube-api-access-584kq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4jpw4\" (UID: \"9af1a0e5-8c28-4be6-8906-f60775a83304\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.318614 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-584kq\" (UniqueName: \"kubernetes.io/projected/9af1a0e5-8c28-4be6-8906-f60775a83304-kube-api-access-584kq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4jpw4\" (UID: \"9af1a0e5-8c28-4be6-8906-f60775a83304\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.318872 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9af1a0e5-8c28-4be6-8906-f60775a83304-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4jpw4\" (UID: \"9af1a0e5-8c28-4be6-8906-f60775a83304\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.318935 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9af1a0e5-8c28-4be6-8906-f60775a83304-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4jpw4\" (UID: \"9af1a0e5-8c28-4be6-8906-f60775a83304\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.323665 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9af1a0e5-8c28-4be6-8906-f60775a83304-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4jpw4\" (UID: \"9af1a0e5-8c28-4be6-8906-f60775a83304\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.323665 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9af1a0e5-8c28-4be6-8906-f60775a83304-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4jpw4\" (UID: \"9af1a0e5-8c28-4be6-8906-f60775a83304\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.336295 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-584kq\" (UniqueName: \"kubernetes.io/projected/9af1a0e5-8c28-4be6-8906-f60775a83304-kube-api-access-584kq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4jpw4\" (UID: \"9af1a0e5-8c28-4be6-8906-f60775a83304\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.456501 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" Oct 14 15:26:13 crc kubenswrapper[4860]: I1014 15:26:13.987233 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4"] Oct 14 15:26:14 crc kubenswrapper[4860]: I1014 15:26:14.041747 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" event={"ID":"9af1a0e5-8c28-4be6-8906-f60775a83304","Type":"ContainerStarted","Data":"cf1d9e9f8c8e66ff58ec3956a00e9a071756b0fdd68863f8cd0fe9d201ae0f00"} Oct 14 15:26:15 crc kubenswrapper[4860]: I1014 15:26:15.051132 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" event={"ID":"9af1a0e5-8c28-4be6-8906-f60775a83304","Type":"ContainerStarted","Data":"3462e86a9f2645edfa6a42dca78df8a5e9ece70d5faacc0638fe30d816750e86"} Oct 14 15:26:15 crc kubenswrapper[4860]: I1014 15:26:15.080754 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" podStartSLOduration=1.622446886 podStartE2EDuration="2.080735178s" podCreationTimestamp="2025-10-14 15:26:13 +0000 UTC" firstStartedPulling="2025-10-14 15:26:13.996319289 +0000 UTC m=+2235.583102738" lastFinishedPulling="2025-10-14 15:26:14.454607581 +0000 UTC m=+2236.041391030" observedRunningTime="2025-10-14 15:26:15.074853365 +0000 UTC m=+2236.661636814" watchObservedRunningTime="2025-10-14 15:26:15.080735178 +0000 UTC m=+2236.667518627" Oct 14 15:26:22 crc kubenswrapper[4860]: I1014 15:26:22.062221 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:26:22 crc kubenswrapper[4860]: E1014 15:26:22.062856 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:26:24 crc kubenswrapper[4860]: I1014 15:26:24.122936 4860 generic.go:334] "Generic (PLEG): container finished" podID="9af1a0e5-8c28-4be6-8906-f60775a83304" containerID="3462e86a9f2645edfa6a42dca78df8a5e9ece70d5faacc0638fe30d816750e86" exitCode=0 Oct 14 15:26:24 crc kubenswrapper[4860]: I1014 15:26:24.123264 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" event={"ID":"9af1a0e5-8c28-4be6-8906-f60775a83304","Type":"ContainerDied","Data":"3462e86a9f2645edfa6a42dca78df8a5e9ece70d5faacc0638fe30d816750e86"} Oct 14 15:26:25 crc kubenswrapper[4860]: I1014 15:26:25.500073 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" Oct 14 15:26:25 crc kubenswrapper[4860]: I1014 15:26:25.508346 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9af1a0e5-8c28-4be6-8906-f60775a83304-inventory\") pod \"9af1a0e5-8c28-4be6-8906-f60775a83304\" (UID: \"9af1a0e5-8c28-4be6-8906-f60775a83304\") " Oct 14 15:26:25 crc kubenswrapper[4860]: I1014 15:26:25.508402 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9af1a0e5-8c28-4be6-8906-f60775a83304-ssh-key\") pod \"9af1a0e5-8c28-4be6-8906-f60775a83304\" (UID: \"9af1a0e5-8c28-4be6-8906-f60775a83304\") " Oct 14 15:26:25 crc kubenswrapper[4860]: I1014 15:26:25.508438 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-584kq\" (UniqueName: \"kubernetes.io/projected/9af1a0e5-8c28-4be6-8906-f60775a83304-kube-api-access-584kq\") pod \"9af1a0e5-8c28-4be6-8906-f60775a83304\" (UID: \"9af1a0e5-8c28-4be6-8906-f60775a83304\") " Oct 14 15:26:25 crc kubenswrapper[4860]: I1014 15:26:25.515908 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af1a0e5-8c28-4be6-8906-f60775a83304-kube-api-access-584kq" (OuterVolumeSpecName: "kube-api-access-584kq") pod "9af1a0e5-8c28-4be6-8906-f60775a83304" (UID: "9af1a0e5-8c28-4be6-8906-f60775a83304"). InnerVolumeSpecName "kube-api-access-584kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:26:25 crc kubenswrapper[4860]: I1014 15:26:25.547169 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af1a0e5-8c28-4be6-8906-f60775a83304-inventory" (OuterVolumeSpecName: "inventory") pod "9af1a0e5-8c28-4be6-8906-f60775a83304" (UID: "9af1a0e5-8c28-4be6-8906-f60775a83304"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:26:25 crc kubenswrapper[4860]: I1014 15:26:25.547217 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af1a0e5-8c28-4be6-8906-f60775a83304-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9af1a0e5-8c28-4be6-8906-f60775a83304" (UID: "9af1a0e5-8c28-4be6-8906-f60775a83304"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:26:25 crc kubenswrapper[4860]: I1014 15:26:25.611677 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9af1a0e5-8c28-4be6-8906-f60775a83304-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 15:26:25 crc kubenswrapper[4860]: I1014 15:26:25.611898 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9af1a0e5-8c28-4be6-8906-f60775a83304-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:26:25 crc kubenswrapper[4860]: I1014 15:26:25.611964 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-584kq\" (UniqueName: \"kubernetes.io/projected/9af1a0e5-8c28-4be6-8906-f60775a83304-kube-api-access-584kq\") on node \"crc\" DevicePath \"\"" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.140499 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" event={"ID":"9af1a0e5-8c28-4be6-8906-f60775a83304","Type":"ContainerDied","Data":"cf1d9e9f8c8e66ff58ec3956a00e9a071756b0fdd68863f8cd0fe9d201ae0f00"} Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.140537 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf1d9e9f8c8e66ff58ec3956a00e9a071756b0fdd68863f8cd0fe9d201ae0f00" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.140982 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4jpw4" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.263617 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k"] Oct 14 15:26:26 crc kubenswrapper[4860]: E1014 15:26:26.264188 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af1a0e5-8c28-4be6-8906-f60775a83304" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.264214 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af1a0e5-8c28-4be6-8906-f60775a83304" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.264452 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af1a0e5-8c28-4be6-8906-f60775a83304" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.265311 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.269058 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.269435 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.273368 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.275797 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.284229 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k"] Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.331275 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf875e18-0a4b-4caf-85e0-fe7d96ace688-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k\" (UID: \"bf875e18-0a4b-4caf-85e0-fe7d96ace688\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.331375 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjjfc\" (UniqueName: \"kubernetes.io/projected/bf875e18-0a4b-4caf-85e0-fe7d96ace688-kube-api-access-bjjfc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k\" (UID: \"bf875e18-0a4b-4caf-85e0-fe7d96ace688\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.331434 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf875e18-0a4b-4caf-85e0-fe7d96ace688-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k\" (UID: \"bf875e18-0a4b-4caf-85e0-fe7d96ace688\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.433487 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf875e18-0a4b-4caf-85e0-fe7d96ace688-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k\" (UID: \"bf875e18-0a4b-4caf-85e0-fe7d96ace688\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.433579 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjjfc\" (UniqueName: \"kubernetes.io/projected/bf875e18-0a4b-4caf-85e0-fe7d96ace688-kube-api-access-bjjfc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k\" (UID: \"bf875e18-0a4b-4caf-85e0-fe7d96ace688\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.433630 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf875e18-0a4b-4caf-85e0-fe7d96ace688-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k\" (UID: 
\"bf875e18-0a4b-4caf-85e0-fe7d96ace688\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.437803 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf875e18-0a4b-4caf-85e0-fe7d96ace688-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k\" (UID: \"bf875e18-0a4b-4caf-85e0-fe7d96ace688\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.439156 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf875e18-0a4b-4caf-85e0-fe7d96ace688-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k\" (UID: \"bf875e18-0a4b-4caf-85e0-fe7d96ace688\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.454253 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjjfc\" (UniqueName: \"kubernetes.io/projected/bf875e18-0a4b-4caf-85e0-fe7d96ace688-kube-api-access-bjjfc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k\" (UID: \"bf875e18-0a4b-4caf-85e0-fe7d96ace688\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" Oct 14 15:26:26 crc kubenswrapper[4860]: I1014 15:26:26.587967 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" Oct 14 15:26:27 crc kubenswrapper[4860]: I1014 15:26:27.112934 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k"] Oct 14 15:26:27 crc kubenswrapper[4860]: I1014 15:26:27.152797 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" event={"ID":"bf875e18-0a4b-4caf-85e0-fe7d96ace688","Type":"ContainerStarted","Data":"76b1b265cf7995f9b22867e5889b3f3f1adb3c5889e0b31f7ec8f4c19c50f954"} Oct 14 15:26:28 crc kubenswrapper[4860]: I1014 15:26:28.162418 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" event={"ID":"bf875e18-0a4b-4caf-85e0-fe7d96ace688","Type":"ContainerStarted","Data":"40fc1d976f0ea7efaa1d54b132641aafcad63d31c79a126cb2ccc5d8bb1ae06c"} Oct 14 15:26:33 crc kubenswrapper[4860]: I1014 15:26:33.062196 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:26:33 crc kubenswrapper[4860]: E1014 15:26:33.062767 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:26:38 crc kubenswrapper[4860]: I1014 15:26:38.248102 4860 generic.go:334] "Generic (PLEG): container finished" podID="bf875e18-0a4b-4caf-85e0-fe7d96ace688" containerID="40fc1d976f0ea7efaa1d54b132641aafcad63d31c79a126cb2ccc5d8bb1ae06c" exitCode=0 Oct 14 15:26:38 crc kubenswrapper[4860]: I1014 15:26:38.248231 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" event={"ID":"bf875e18-0a4b-4caf-85e0-fe7d96ace688","Type":"ContainerDied","Data":"40fc1d976f0ea7efaa1d54b132641aafcad63d31c79a126cb2ccc5d8bb1ae06c"} Oct 14 15:26:39 crc kubenswrapper[4860]: I1014 15:26:39.710474 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" Oct 14 15:26:39 crc kubenswrapper[4860]: I1014 15:26:39.816120 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf875e18-0a4b-4caf-85e0-fe7d96ace688-ssh-key\") pod \"bf875e18-0a4b-4caf-85e0-fe7d96ace688\" (UID: \"bf875e18-0a4b-4caf-85e0-fe7d96ace688\") " Oct 14 15:26:39 crc kubenswrapper[4860]: I1014 15:26:39.816216 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf875e18-0a4b-4caf-85e0-fe7d96ace688-inventory\") pod \"bf875e18-0a4b-4caf-85e0-fe7d96ace688\" (UID: \"bf875e18-0a4b-4caf-85e0-fe7d96ace688\") " Oct 14 15:26:39 crc kubenswrapper[4860]: I1014 15:26:39.816343 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjjfc\" (UniqueName: \"kubernetes.io/projected/bf875e18-0a4b-4caf-85e0-fe7d96ace688-kube-api-access-bjjfc\") pod \"bf875e18-0a4b-4caf-85e0-fe7d96ace688\" (UID: \"bf875e18-0a4b-4caf-85e0-fe7d96ace688\") " Oct 14 15:26:39 crc kubenswrapper[4860]: I1014 15:26:39.825576 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf875e18-0a4b-4caf-85e0-fe7d96ace688-kube-api-access-bjjfc" (OuterVolumeSpecName: "kube-api-access-bjjfc") pod "bf875e18-0a4b-4caf-85e0-fe7d96ace688" (UID: "bf875e18-0a4b-4caf-85e0-fe7d96ace688"). InnerVolumeSpecName "kube-api-access-bjjfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:26:39 crc kubenswrapper[4860]: I1014 15:26:39.866338 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf875e18-0a4b-4caf-85e0-fe7d96ace688-inventory" (OuterVolumeSpecName: "inventory") pod "bf875e18-0a4b-4caf-85e0-fe7d96ace688" (UID: "bf875e18-0a4b-4caf-85e0-fe7d96ace688"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:26:39 crc kubenswrapper[4860]: I1014 15:26:39.867316 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf875e18-0a4b-4caf-85e0-fe7d96ace688-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bf875e18-0a4b-4caf-85e0-fe7d96ace688" (UID: "bf875e18-0a4b-4caf-85e0-fe7d96ace688"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:26:39 crc kubenswrapper[4860]: I1014 15:26:39.918773 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf875e18-0a4b-4caf-85e0-fe7d96ace688-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:26:39 crc kubenswrapper[4860]: I1014 15:26:39.918819 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf875e18-0a4b-4caf-85e0-fe7d96ace688-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 15:26:39 crc kubenswrapper[4860]: I1014 15:26:39.918831 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjjfc\" (UniqueName: \"kubernetes.io/projected/bf875e18-0a4b-4caf-85e0-fe7d96ace688-kube-api-access-bjjfc\") on node \"crc\" DevicePath \"\"" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.266493 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" event={"ID":"bf875e18-0a4b-4caf-85e0-fe7d96ace688","Type":"ContainerDied","Data":"76b1b265cf7995f9b22867e5889b3f3f1adb3c5889e0b31f7ec8f4c19c50f954"} Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.266531 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76b1b265cf7995f9b22867e5889b3f3f1adb3c5889e0b31f7ec8f4c19c50f954" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.266534 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.373284 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk"] Oct 14 15:26:40 crc kubenswrapper[4860]: E1014 15:26:40.373668 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf875e18-0a4b-4caf-85e0-fe7d96ace688" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.373692 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf875e18-0a4b-4caf-85e0-fe7d96ace688" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.373924 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf875e18-0a4b-4caf-85e0-fe7d96ace688" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.374612 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.377770 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.378218 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.378380 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.381270 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.381331 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.381429 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.381350 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.381286 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.407614 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk"] Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.440024 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.440146 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.440238 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.440285 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.440762 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.440862 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.440889 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.440913 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.440935 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s2zb\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-kube-api-access-2s2zb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.440963 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.440989 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.441057 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.441081 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.441099 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.543724 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.543795 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.543857 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.543894 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.543944 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.543965 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.543994 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.544015 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s2zb\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-kube-api-access-2s2zb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.544055 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.544080 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.544124 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.544148 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: 
\"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.544172 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.544203 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.551283 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.551642 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.551712 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.552901 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.554463 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.556469 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.556867 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.557792 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.558442 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.558668 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.559643 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.564678 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.564797 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.572394 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s2zb\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-kube-api-access-2s2zb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:40 crc kubenswrapper[4860]: I1014 15:26:40.706466 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:26:41 crc kubenswrapper[4860]: W1014 15:26:41.263490 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21809a83_1209_4a97_a550_1dfcccd04ec3.slice/crio-e283fdd5e83571cf07a01ab55e4ac8d92a97d521ef6c194a70318614556cd4d1 WatchSource:0}: Error finding container e283fdd5e83571cf07a01ab55e4ac8d92a97d521ef6c194a70318614556cd4d1: Status 404 returned error can't find the container with id e283fdd5e83571cf07a01ab55e4ac8d92a97d521ef6c194a70318614556cd4d1 Oct 14 15:26:41 crc kubenswrapper[4860]: I1014 15:26:41.264332 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk"] Oct 14 15:26:41 crc kubenswrapper[4860]: I1014 15:26:41.290602 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" event={"ID":"21809a83-1209-4a97-a550-1dfcccd04ec3","Type":"ContainerStarted","Data":"e283fdd5e83571cf07a01ab55e4ac8d92a97d521ef6c194a70318614556cd4d1"} Oct 14 15:26:42 crc kubenswrapper[4860]: I1014 15:26:42.310112 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" event={"ID":"21809a83-1209-4a97-a550-1dfcccd04ec3","Type":"ContainerStarted","Data":"86b2db6cdc564eb365cc74f8ff04c6a6e3d29ed546071cae34afe5d5d3d3ceea"} Oct 14 15:26:42 crc kubenswrapper[4860]: I1014 15:26:42.333669 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" podStartSLOduration=2.169626623 podStartE2EDuration="2.333650784s" podCreationTimestamp="2025-10-14 15:26:40 +0000 UTC" firstStartedPulling="2025-10-14 15:26:41.266353619 +0000 UTC m=+2262.853137068" lastFinishedPulling="2025-10-14 15:26:41.43037778 +0000 UTC m=+2263.017161229" observedRunningTime="2025-10-14 15:26:42.330980739 +0000 UTC m=+2263.917764188" watchObservedRunningTime="2025-10-14 15:26:42.333650784 +0000 UTC m=+2263.920434223" Oct 14 15:26:47 crc kubenswrapper[4860]: I1014 15:26:47.062130 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:26:47 crc kubenswrapper[4860]: E1014 15:26:47.062966 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:27:00 crc kubenswrapper[4860]: I1014 15:27:00.063220 4860 scope.go:117] 
"RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:27:00 crc kubenswrapper[4860]: E1014 15:27:00.064412 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:27:12 crc kubenswrapper[4860]: I1014 15:27:12.061376 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:27:12 crc kubenswrapper[4860]: E1014 15:27:12.062074 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:27:22 crc kubenswrapper[4860]: I1014 15:27:22.659643 4860 generic.go:334] "Generic (PLEG): container finished" podID="21809a83-1209-4a97-a550-1dfcccd04ec3" containerID="86b2db6cdc564eb365cc74f8ff04c6a6e3d29ed546071cae34afe5d5d3d3ceea" exitCode=0 Oct 14 15:27:22 crc kubenswrapper[4860]: I1014 15:27:22.659732 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" event={"ID":"21809a83-1209-4a97-a550-1dfcccd04ec3","Type":"ContainerDied","Data":"86b2db6cdc564eb365cc74f8ff04c6a6e3d29ed546071cae34afe5d5d3d3ceea"} Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.073917 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.117853 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s2zb\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-kube-api-access-2s2zb\") pod \"21809a83-1209-4a97-a550-1dfcccd04ec3\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.117948 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-libvirt-combined-ca-bundle\") pod \"21809a83-1209-4a97-a550-1dfcccd04ec3\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.118007 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"21809a83-1209-4a97-a550-1dfcccd04ec3\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.118067 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-ssh-key\") pod \"21809a83-1209-4a97-a550-1dfcccd04ec3\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.118115 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-bootstrap-combined-ca-bundle\") pod \"21809a83-1209-4a97-a550-1dfcccd04ec3\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.118324 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"21809a83-1209-4a97-a550-1dfcccd04ec3\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.118356 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"21809a83-1209-4a97-a550-1dfcccd04ec3\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.118433 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-repo-setup-combined-ca-bundle\") pod \"21809a83-1209-4a97-a550-1dfcccd04ec3\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.118463 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-nova-combined-ca-bundle\") pod \"21809a83-1209-4a97-a550-1dfcccd04ec3\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") 
" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.118536 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-inventory\") pod \"21809a83-1209-4a97-a550-1dfcccd04ec3\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.118609 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-neutron-metadata-combined-ca-bundle\") pod \"21809a83-1209-4a97-a550-1dfcccd04ec3\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.120453 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"21809a83-1209-4a97-a550-1dfcccd04ec3\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.120511 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-ovn-combined-ca-bundle\") pod \"21809a83-1209-4a97-a550-1dfcccd04ec3\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.120552 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-telemetry-combined-ca-bundle\") pod \"21809a83-1209-4a97-a550-1dfcccd04ec3\" (UID: \"21809a83-1209-4a97-a550-1dfcccd04ec3\") " Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.126710 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "21809a83-1209-4a97-a550-1dfcccd04ec3" (UID: "21809a83-1209-4a97-a550-1dfcccd04ec3"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.126810 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "21809a83-1209-4a97-a550-1dfcccd04ec3" (UID: "21809a83-1209-4a97-a550-1dfcccd04ec3"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.129368 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "21809a83-1209-4a97-a550-1dfcccd04ec3" (UID: "21809a83-1209-4a97-a550-1dfcccd04ec3"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.129589 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "21809a83-1209-4a97-a550-1dfcccd04ec3" (UID: "21809a83-1209-4a97-a550-1dfcccd04ec3"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.133636 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "21809a83-1209-4a97-a550-1dfcccd04ec3" (UID: "21809a83-1209-4a97-a550-1dfcccd04ec3"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.134641 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "21809a83-1209-4a97-a550-1dfcccd04ec3" (UID: "21809a83-1209-4a97-a550-1dfcccd04ec3"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.134998 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-kube-api-access-2s2zb" (OuterVolumeSpecName: "kube-api-access-2s2zb") pod "21809a83-1209-4a97-a550-1dfcccd04ec3" (UID: "21809a83-1209-4a97-a550-1dfcccd04ec3"). InnerVolumeSpecName "kube-api-access-2s2zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.135841 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "21809a83-1209-4a97-a550-1dfcccd04ec3" (UID: "21809a83-1209-4a97-a550-1dfcccd04ec3"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.138151 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "21809a83-1209-4a97-a550-1dfcccd04ec3" (UID: "21809a83-1209-4a97-a550-1dfcccd04ec3"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.138537 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "21809a83-1209-4a97-a550-1dfcccd04ec3" (UID: "21809a83-1209-4a97-a550-1dfcccd04ec3"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.138654 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "21809a83-1209-4a97-a550-1dfcccd04ec3" (UID: "21809a83-1209-4a97-a550-1dfcccd04ec3"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.141968 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "21809a83-1209-4a97-a550-1dfcccd04ec3" (UID: "21809a83-1209-4a97-a550-1dfcccd04ec3"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.154834 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "21809a83-1209-4a97-a550-1dfcccd04ec3" (UID: "21809a83-1209-4a97-a550-1dfcccd04ec3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.158224 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-inventory" (OuterVolumeSpecName: "inventory") pod "21809a83-1209-4a97-a550-1dfcccd04ec3" (UID: "21809a83-1209-4a97-a550-1dfcccd04ec3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.222850 4860 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.222895 4860 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.222910 4860 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.222922 4860 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.222934 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.222944 4860 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.222957 4860 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.222971 4860 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.222983 4860 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.222996 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s2zb\" (UniqueName: \"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-kube-api-access-2s2zb\") on node \"crc\" DevicePath \"\"" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.223009 4860 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.223022 4860 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/21809a83-1209-4a97-a550-1dfcccd04ec3-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.223047 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.223057 4860 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21809a83-1209-4a97-a550-1dfcccd04ec3-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.683960 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" event={"ID":"21809a83-1209-4a97-a550-1dfcccd04ec3","Type":"ContainerDied","Data":"e283fdd5e83571cf07a01ab55e4ac8d92a97d521ef6c194a70318614556cd4d1"} Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.683997 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e283fdd5e83571cf07a01ab55e4ac8d92a97d521ef6c194a70318614556cd4d1" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.684005 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.791669 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w"] Oct 14 15:27:24 crc kubenswrapper[4860]: E1014 15:27:24.793066 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21809a83-1209-4a97-a550-1dfcccd04ec3" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.793093 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="21809a83-1209-4a97-a550-1dfcccd04ec3" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.793362 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="21809a83-1209-4a97-a550-1dfcccd04ec3" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.794202 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.795821 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.800931 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.801161 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.801230 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w"] Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.801308 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.801579 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.939784 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jm89w\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.939877 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jm89w\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.939976 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jm89w\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.940064 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg44x\" (UniqueName: \"kubernetes.io/projected/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-kube-api-access-gg44x\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jm89w\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:24 crc kubenswrapper[4860]: I1014 15:27:24.940101 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jm89w\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:25 crc kubenswrapper[4860]: I1014 15:27:25.041565 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg44x\" 
(UniqueName: \"kubernetes.io/projected/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-kube-api-access-gg44x\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jm89w\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:25 crc kubenswrapper[4860]: I1014 15:27:25.041646 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jm89w\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:25 crc kubenswrapper[4860]: I1014 15:27:25.041712 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jm89w\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:25 crc kubenswrapper[4860]: I1014 15:27:25.041760 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jm89w\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:25 crc kubenswrapper[4860]: I1014 15:27:25.041856 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jm89w\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:25 crc kubenswrapper[4860]: I1014 15:27:25.043387 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jm89w\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:25 crc kubenswrapper[4860]: I1014 15:27:25.047134 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jm89w\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:25 crc kubenswrapper[4860]: I1014 15:27:25.049082 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jm89w\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:25 crc kubenswrapper[4860]: I1014 15:27:25.054958 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jm89w\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:25 crc kubenswrapper[4860]: I1014 15:27:25.058425 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg44x\" (UniqueName: \"kubernetes.io/projected/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-kube-api-access-gg44x\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jm89w\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:25 crc kubenswrapper[4860]: I1014 15:27:25.112845 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:27:25 crc kubenswrapper[4860]: I1014 15:27:25.620211 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w"] Oct 14 15:27:25 crc kubenswrapper[4860]: I1014 15:27:25.692639 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" event={"ID":"758f6aec-34fc-48fc-a6bb-f6ac287a02d0","Type":"ContainerStarted","Data":"5cd8e866ee61849b71cf0275c38cbebbf4075de54fb4cd81041faf3a67b872fe"} Oct 14 15:27:26 crc kubenswrapper[4860]: I1014 15:27:26.061650 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:27:26 crc kubenswrapper[4860]: E1014 15:27:26.062088 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:27:26 crc kubenswrapper[4860]: I1014 15:27:26.704147 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" event={"ID":"758f6aec-34fc-48fc-a6bb-f6ac287a02d0","Type":"ContainerStarted","Data":"d408a8ae7df326e886db3e199484b0060ef1769890193cdf4fec0e23661a4cca"} Oct 14 15:27:26 crc kubenswrapper[4860]: I1014 15:27:26.728560 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" podStartSLOduration=2.567399107 podStartE2EDuration="2.728541848s" podCreationTimestamp="2025-10-14 15:27:24 +0000 UTC" firstStartedPulling="2025-10-14 15:27:25.627395004 +0000 UTC m=+2307.214178463" lastFinishedPulling="2025-10-14 15:27:25.788537755 +0000 UTC m=+2307.375321204" observedRunningTime="2025-10-14 15:27:26.722604474 +0000 UTC m=+2308.309387923" watchObservedRunningTime="2025-10-14 15:27:26.728541848 +0000 UTC m=+2308.315325297" Oct 14 15:27:41 crc kubenswrapper[4860]: I1014 15:27:41.061168 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:27:41 crc kubenswrapper[4860]: E1014 15:27:41.061935 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:27:55 crc 
kubenswrapper[4860]: I1014 15:27:55.064829 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:27:55 crc kubenswrapper[4860]: E1014 15:27:55.066341 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:28:10 crc kubenswrapper[4860]: I1014 15:28:10.062823 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:28:10 crc kubenswrapper[4860]: E1014 15:28:10.063419 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:28:21 crc kubenswrapper[4860]: I1014 15:28:21.061547 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:28:21 crc kubenswrapper[4860]: E1014 15:28:21.062498 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:28:33 crc kubenswrapper[4860]: I1014 15:28:33.061787 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:28:33 crc kubenswrapper[4860]: E1014 15:28:33.062651 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:28:33 crc kubenswrapper[4860]: I1014 15:28:33.288279 4860 generic.go:334] "Generic (PLEG): container finished" podID="758f6aec-34fc-48fc-a6bb-f6ac287a02d0" containerID="d408a8ae7df326e886db3e199484b0060ef1769890193cdf4fec0e23661a4cca" exitCode=0 Oct 14 15:28:33 crc kubenswrapper[4860]: I1014 15:28:33.288325 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" event={"ID":"758f6aec-34fc-48fc-a6bb-f6ac287a02d0","Type":"ContainerDied","Data":"d408a8ae7df326e886db3e199484b0060ef1769890193cdf4fec0e23661a4cca"} Oct 14 15:28:34 crc kubenswrapper[4860]: I1014 15:28:34.690878 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:28:34 crc kubenswrapper[4860]: I1014 15:28:34.801113 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ovn-combined-ca-bundle\") pod \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " Oct 14 15:28:34 crc kubenswrapper[4860]: I1014 15:28:34.801270 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg44x\" (UniqueName: \"kubernetes.io/projected/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-kube-api-access-gg44x\") pod \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " Oct 14 15:28:34 crc kubenswrapper[4860]: I1014 15:28:34.801352 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ovncontroller-config-0\") pod \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " Oct 14 15:28:34 crc kubenswrapper[4860]: I1014 15:28:34.801437 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-inventory\") pod \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " Oct 14 15:28:34 crc kubenswrapper[4860]: I1014 15:28:34.801607 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ssh-key\") pod \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\" (UID: \"758f6aec-34fc-48fc-a6bb-f6ac287a02d0\") " Oct 14 15:28:34 crc kubenswrapper[4860]: I1014 15:28:34.807526 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "758f6aec-34fc-48fc-a6bb-f6ac287a02d0" (UID: "758f6aec-34fc-48fc-a6bb-f6ac287a02d0"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:28:34 crc kubenswrapper[4860]: I1014 15:28:34.807587 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-kube-api-access-gg44x" (OuterVolumeSpecName: "kube-api-access-gg44x") pod "758f6aec-34fc-48fc-a6bb-f6ac287a02d0" (UID: "758f6aec-34fc-48fc-a6bb-f6ac287a02d0"). InnerVolumeSpecName "kube-api-access-gg44x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:28:34 crc kubenswrapper[4860]: I1014 15:28:34.826856 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "758f6aec-34fc-48fc-a6bb-f6ac287a02d0" (UID: "758f6aec-34fc-48fc-a6bb-f6ac287a02d0"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:28:34 crc kubenswrapper[4860]: I1014 15:28:34.829556 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "758f6aec-34fc-48fc-a6bb-f6ac287a02d0" (UID: "758f6aec-34fc-48fc-a6bb-f6ac287a02d0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:28:34 crc kubenswrapper[4860]: I1014 15:28:34.830543 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-inventory" (OuterVolumeSpecName: "inventory") pod "758f6aec-34fc-48fc-a6bb-f6ac287a02d0" (UID: "758f6aec-34fc-48fc-a6bb-f6ac287a02d0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:28:34 crc kubenswrapper[4860]: I1014 15:28:34.904189 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 15:28:34 crc kubenswrapper[4860]: I1014 15:28:34.904230 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:28:34 crc kubenswrapper[4860]: I1014 15:28:34.904240 4860 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:28:34 crc kubenswrapper[4860]: I1014 15:28:34.904251 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg44x\" (UniqueName: \"kubernetes.io/projected/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-kube-api-access-gg44x\") on node \"crc\" DevicePath \"\"" Oct 14 15:28:34 crc kubenswrapper[4860]: I1014 15:28:34.904260 4860 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/758f6aec-34fc-48fc-a6bb-f6ac287a02d0-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.313960 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" event={"ID":"758f6aec-34fc-48fc-a6bb-f6ac287a02d0","Type":"ContainerDied","Data":"5cd8e866ee61849b71cf0275c38cbebbf4075de54fb4cd81041faf3a67b872fe"} Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.314006 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cd8e866ee61849b71cf0275c38cbebbf4075de54fb4cd81041faf3a67b872fe" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.314359 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jm89w" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.385691 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl"] Oct 14 15:28:35 crc kubenswrapper[4860]: E1014 15:28:35.386119 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="758f6aec-34fc-48fc-a6bb-f6ac287a02d0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.386138 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="758f6aec-34fc-48fc-a6bb-f6ac287a02d0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.386361 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="758f6aec-34fc-48fc-a6bb-f6ac287a02d0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.388064 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.396085 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.396924 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.397294 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.401050 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.401920 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.402906 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.409724 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl"] Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.422112 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.422241 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.422481 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.422583 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.422601 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz28z\" (UniqueName: \"kubernetes.io/projected/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-kube-api-access-wz28z\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.422796 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.525344 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz28z\" (UniqueName: \"kubernetes.io/projected/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-kube-api-access-wz28z\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.525412 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.525520 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.525570 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.525622 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.525721 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.531191 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.532015 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.532665 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.543885 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.553623 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz28z\" (UniqueName: \"kubernetes.io/projected/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-kube-api-access-wz28z\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.554279 4860 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:35 crc kubenswrapper[4860]: I1014 15:28:35.705849 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:28:36 crc kubenswrapper[4860]: I1014 15:28:36.282911 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl"] Oct 14 15:28:36 crc kubenswrapper[4860]: I1014 15:28:36.322903 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" event={"ID":"3601c2b8-7185-42fa-bbe1-b0e6b1e07332","Type":"ContainerStarted","Data":"81365d94122aec9dd8476b03a807d7f728835d16ac3bf8a4cb009c48fce06a60"} Oct 14 15:28:37 crc kubenswrapper[4860]: I1014 15:28:37.333492 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" event={"ID":"3601c2b8-7185-42fa-bbe1-b0e6b1e07332","Type":"ContainerStarted","Data":"8657b5304755ec7bc28ec2c93ba522b51ddb428c30a26428648ec0a43925a3d5"} Oct 14 15:28:37 crc kubenswrapper[4860]: I1014 15:28:37.350090 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" podStartSLOduration=2.196213889 podStartE2EDuration="2.350070253s" podCreationTimestamp="2025-10-14 15:28:35 +0000 UTC" firstStartedPulling="2025-10-14 15:28:36.285735074 +0000 UTC m=+2377.872518523" lastFinishedPulling="2025-10-14 15:28:36.439591438 +0000 UTC m=+2378.026374887" observedRunningTime="2025-10-14 15:28:37.347991682 +0000 UTC m=+2378.934775131" watchObservedRunningTime="2025-10-14 15:28:37.350070253 +0000 UTC m=+2378.936853702" Oct 14 15:28:48 crc kubenswrapper[4860]: I1014 15:28:48.061543 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:28:48 crc kubenswrapper[4860]: E1014 15:28:48.063237 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:29:02 crc kubenswrapper[4860]: I1014 15:29:02.061619 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:29:02 crc kubenswrapper[4860]: E1014 15:29:02.062558 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:29:16 crc kubenswrapper[4860]: I1014 15:29:16.062399 4860 scope.go:117] "RemoveContainer" 
containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:29:16 crc kubenswrapper[4860]: E1014 15:29:16.063363 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:29:26 crc kubenswrapper[4860]: I1014 15:29:26.756457 4860 generic.go:334] "Generic (PLEG): container finished" podID="3601c2b8-7185-42fa-bbe1-b0e6b1e07332" containerID="8657b5304755ec7bc28ec2c93ba522b51ddb428c30a26428648ec0a43925a3d5" exitCode=0 Oct 14 15:29:26 crc kubenswrapper[4860]: I1014 15:29:26.756509 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" event={"ID":"3601c2b8-7185-42fa-bbe1-b0e6b1e07332","Type":"ContainerDied","Data":"8657b5304755ec7bc28ec2c93ba522b51ddb428c30a26428648ec0a43925a3d5"} Oct 14 15:29:27 crc kubenswrapper[4860]: I1014 15:29:27.062614 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:29:27 crc kubenswrapper[4860]: E1014 15:29:27.062960 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.228691 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.251621 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-neutron-ovn-metadata-agent-neutron-config-0\") pod \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.251688 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-inventory\") pod \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.251726 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-neutron-metadata-combined-ca-bundle\") pod \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.251799 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-ssh-key\") pod \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.263765 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3601c2b8-7185-42fa-bbe1-b0e6b1e07332" (UID: "3601c2b8-7185-42fa-bbe1-b0e6b1e07332"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.297222 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "3601c2b8-7185-42fa-bbe1-b0e6b1e07332" (UID: "3601c2b8-7185-42fa-bbe1-b0e6b1e07332"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.302114 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3601c2b8-7185-42fa-bbe1-b0e6b1e07332" (UID: "3601c2b8-7185-42fa-bbe1-b0e6b1e07332"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.306579 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-inventory" (OuterVolumeSpecName: "inventory") pod "3601c2b8-7185-42fa-bbe1-b0e6b1e07332" (UID: "3601c2b8-7185-42fa-bbe1-b0e6b1e07332"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.353348 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz28z\" (UniqueName: \"kubernetes.io/projected/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-kube-api-access-wz28z\") pod \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.353613 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-nova-metadata-neutron-config-0\") pod \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\" (UID: \"3601c2b8-7185-42fa-bbe1-b0e6b1e07332\") " Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.353893 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.353912 4860 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.353923 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.353932 4860 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.356993 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-kube-api-access-wz28z" (OuterVolumeSpecName: "kube-api-access-wz28z") pod "3601c2b8-7185-42fa-bbe1-b0e6b1e07332" (UID: "3601c2b8-7185-42fa-bbe1-b0e6b1e07332"). InnerVolumeSpecName "kube-api-access-wz28z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.378795 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "3601c2b8-7185-42fa-bbe1-b0e6b1e07332" (UID: "3601c2b8-7185-42fa-bbe1-b0e6b1e07332"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.455642 4860 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.455689 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz28z\" (UniqueName: \"kubernetes.io/projected/3601c2b8-7185-42fa-bbe1-b0e6b1e07332-kube-api-access-wz28z\") on node \"crc\" DevicePath \"\"" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.780110 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" event={"ID":"3601c2b8-7185-42fa-bbe1-b0e6b1e07332","Type":"ContainerDied","Data":"81365d94122aec9dd8476b03a807d7f728835d16ac3bf8a4cb009c48fce06a60"} Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.780474 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81365d94122aec9dd8476b03a807d7f728835d16ac3bf8a4cb009c48fce06a60" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.780389 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.965127 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb"] Oct 14 15:29:28 crc kubenswrapper[4860]: E1014 15:29:28.965536 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3601c2b8-7185-42fa-bbe1-b0e6b1e07332" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.965553 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3601c2b8-7185-42fa-bbe1-b0e6b1e07332" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.965743 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3601c2b8-7185-42fa-bbe1-b0e6b1e07332" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.966393 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.968504 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.968545 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.969295 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.969429 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.969562 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:29:28 crc kubenswrapper[4860]: I1014 15:29:28.978915 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb"] Oct 14 15:29:29 crc kubenswrapper[4860]: I1014 15:29:29.168739 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:29 crc kubenswrapper[4860]: I1014 15:29:29.169112 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:29 crc kubenswrapper[4860]: I1014 15:29:29.169386 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:29 crc kubenswrapper[4860]: I1014 15:29:29.169442 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:29 crc kubenswrapper[4860]: I1014 15:29:29.169660 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrm4s\" (UniqueName: \"kubernetes.io/projected/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-kube-api-access-hrm4s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:29 crc kubenswrapper[4860]: I1014 15:29:29.271979 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:29 crc kubenswrapper[4860]: I1014 15:29:29.272166 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:29 crc kubenswrapper[4860]: I1014 15:29:29.272202 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:29 crc kubenswrapper[4860]: I1014 15:29:29.272250 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:29 crc kubenswrapper[4860]: I1014 15:29:29.272326 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrm4s\" (UniqueName: \"kubernetes.io/projected/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-kube-api-access-hrm4s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:29 crc kubenswrapper[4860]: I1014 15:29:29.276787 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:29 crc kubenswrapper[4860]: I1014 15:29:29.276797 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:29 crc kubenswrapper[4860]: I1014 15:29:29.277214 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:29 crc kubenswrapper[4860]: I1014 15:29:29.284554 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:29 crc kubenswrapper[4860]: I1014 15:29:29.293152 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrm4s\" (UniqueName: \"kubernetes.io/projected/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-kube-api-access-hrm4s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:29 crc kubenswrapper[4860]: I1014 15:29:29.586197 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:29:30 crc kubenswrapper[4860]: I1014 15:29:30.221452 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb"] Oct 14 15:29:30 crc kubenswrapper[4860]: I1014 15:29:30.805707 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" event={"ID":"ad612cd6-7c9d-44c4-aa1e-33055de4eee6","Type":"ContainerStarted","Data":"5abdd1639cf70132e42b60f76c93f18e5cf1b5f23292d2c7cbae7468fc0db405"} Oct 14 15:29:30 crc kubenswrapper[4860]: I1014 15:29:30.806012 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" event={"ID":"ad612cd6-7c9d-44c4-aa1e-33055de4eee6","Type":"ContainerStarted","Data":"35a02a964e4c609b835dedfb9ea135008c14813cbda4ccf61058028b0d58dbc5"} Oct 14 15:29:30 crc kubenswrapper[4860]: I1014 15:29:30.830632 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" podStartSLOduration=2.683633605 podStartE2EDuration="2.830613652s" podCreationTimestamp="2025-10-14 15:29:28 +0000 UTC" firstStartedPulling="2025-10-14 15:29:30.225321424 +0000 UTC m=+2431.812104873" lastFinishedPulling="2025-10-14 15:29:30.372301461 +0000 UTC m=+2431.959084920" observedRunningTime="2025-10-14 15:29:30.822389193 +0000 UTC m=+2432.409172642" watchObservedRunningTime="2025-10-14 15:29:30.830613652 +0000 UTC m=+2432.417397101" Oct 14 15:29:38 crc kubenswrapper[4860]: I1014 15:29:38.061864 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:29:38 crc kubenswrapper[4860]: E1014 15:29:38.062728 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:29:52 crc kubenswrapper[4860]: I1014 15:29:52.062119 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:29:52 crc kubenswrapper[4860]: E1014 15:29:52.062918 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:30:00 crc kubenswrapper[4860]: I1014 15:30:00.164720 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t"] Oct 14 15:30:00 crc kubenswrapper[4860]: I1014 15:30:00.166927 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t" Oct 14 15:30:00 crc kubenswrapper[4860]: I1014 15:30:00.169316 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 15:30:00 crc kubenswrapper[4860]: I1014 15:30:00.171244 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 15:30:00 crc kubenswrapper[4860]: I1014 15:30:00.195110 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t"] Oct 14 15:30:00 crc kubenswrapper[4860]: I1014 15:30:00.352889 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-secret-volume\") pod \"collect-profiles-29340930-7qf9t\" (UID: \"d12e0b61-a53a-4f81-b2e2-8ae3efb42288\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t" Oct 14 15:30:00 crc kubenswrapper[4860]: I1014 15:30:00.352995 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6zzg\" (UniqueName: \"kubernetes.io/projected/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-kube-api-access-g6zzg\") pod \"collect-profiles-29340930-7qf9t\" (UID: \"d12e0b61-a53a-4f81-b2e2-8ae3efb42288\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t" Oct 14 15:30:00 crc kubenswrapper[4860]: I1014 15:30:00.353074 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-config-volume\") pod \"collect-profiles-29340930-7qf9t\" (UID: \"d12e0b61-a53a-4f81-b2e2-8ae3efb42288\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t" Oct 14 15:30:00 crc kubenswrapper[4860]: I1014 15:30:00.455058 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6zzg\" (UniqueName: \"kubernetes.io/projected/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-kube-api-access-g6zzg\") pod \"collect-profiles-29340930-7qf9t\" (UID: \"d12e0b61-a53a-4f81-b2e2-8ae3efb42288\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t" Oct 14 15:30:00 crc kubenswrapper[4860]: I1014 15:30:00.455163 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-config-volume\") pod \"collect-profiles-29340930-7qf9t\" (UID: \"d12e0b61-a53a-4f81-b2e2-8ae3efb42288\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t" Oct 14 15:30:00 crc kubenswrapper[4860]: I1014 15:30:00.455374 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-secret-volume\") 
pod \"collect-profiles-29340930-7qf9t\" (UID: \"d12e0b61-a53a-4f81-b2e2-8ae3efb42288\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t" Oct 14 15:30:00 crc kubenswrapper[4860]: I1014 15:30:00.456253 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-config-volume\") pod \"collect-profiles-29340930-7qf9t\" (UID: \"d12e0b61-a53a-4f81-b2e2-8ae3efb42288\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t" Oct 14 15:30:00 crc kubenswrapper[4860]: I1014 15:30:00.461564 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-secret-volume\") pod \"collect-profiles-29340930-7qf9t\" (UID: \"d12e0b61-a53a-4f81-b2e2-8ae3efb42288\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t" Oct 14 15:30:00 crc kubenswrapper[4860]: I1014 15:30:00.474073 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6zzg\" (UniqueName: \"kubernetes.io/projected/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-kube-api-access-g6zzg\") pod \"collect-profiles-29340930-7qf9t\" (UID: \"d12e0b61-a53a-4f81-b2e2-8ae3efb42288\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t" Oct 14 15:30:00 crc kubenswrapper[4860]: I1014 15:30:00.493280 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t" Oct 14 15:30:00 crc kubenswrapper[4860]: I1014 15:30:00.984171 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t"] Oct 14 15:30:01 crc kubenswrapper[4860]: I1014 15:30:01.036007 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t" event={"ID":"d12e0b61-a53a-4f81-b2e2-8ae3efb42288","Type":"ContainerStarted","Data":"76e2a775dd1cbe10e77379f68140b6a2c4244f29811b77c018ef20b3d699f282"} Oct 14 15:30:02 crc kubenswrapper[4860]: I1014 15:30:02.046717 4860 generic.go:334] "Generic (PLEG): container finished" podID="d12e0b61-a53a-4f81-b2e2-8ae3efb42288" containerID="d76e098bda4daace3fbab28c3cb9265ac2f276c5711911dbc1d4f6de7cf20a9c" exitCode=0 Oct 14 15:30:02 crc kubenswrapper[4860]: I1014 15:30:02.046790 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t" event={"ID":"d12e0b61-a53a-4f81-b2e2-8ae3efb42288","Type":"ContainerDied","Data":"d76e098bda4daace3fbab28c3cb9265ac2f276c5711911dbc1d4f6de7cf20a9c"} Oct 14 15:30:03 crc kubenswrapper[4860]: I1014 15:30:03.426108 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t" Oct 14 15:30:03 crc kubenswrapper[4860]: I1014 15:30:03.621852 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6zzg\" (UniqueName: \"kubernetes.io/projected/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-kube-api-access-g6zzg\") pod \"d12e0b61-a53a-4f81-b2e2-8ae3efb42288\" (UID: \"d12e0b61-a53a-4f81-b2e2-8ae3efb42288\") " Oct 14 15:30:03 crc kubenswrapper[4860]: I1014 15:30:03.622082 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-config-volume\") pod \"d12e0b61-a53a-4f81-b2e2-8ae3efb42288\" (UID: \"d12e0b61-a53a-4f81-b2e2-8ae3efb42288\") " Oct 14 15:30:03 crc kubenswrapper[4860]: I1014 15:30:03.622104 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-secret-volume\") pod \"d12e0b61-a53a-4f81-b2e2-8ae3efb42288\" (UID: \"d12e0b61-a53a-4f81-b2e2-8ae3efb42288\") " Oct 14 15:30:03 crc kubenswrapper[4860]: I1014 15:30:03.623183 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-config-volume" (OuterVolumeSpecName: "config-volume") pod "d12e0b61-a53a-4f81-b2e2-8ae3efb42288" (UID: "d12e0b61-a53a-4f81-b2e2-8ae3efb42288"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:30:03 crc kubenswrapper[4860]: I1014 15:30:03.629903 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d12e0b61-a53a-4f81-b2e2-8ae3efb42288" (UID: "d12e0b61-a53a-4f81-b2e2-8ae3efb42288"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:30:03 crc kubenswrapper[4860]: I1014 15:30:03.637900 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-kube-api-access-g6zzg" (OuterVolumeSpecName: "kube-api-access-g6zzg") pod "d12e0b61-a53a-4f81-b2e2-8ae3efb42288" (UID: "d12e0b61-a53a-4f81-b2e2-8ae3efb42288"). InnerVolumeSpecName "kube-api-access-g6zzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:30:03 crc kubenswrapper[4860]: I1014 15:30:03.725523 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 15:30:03 crc kubenswrapper[4860]: I1014 15:30:03.725564 4860 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 15:30:03 crc kubenswrapper[4860]: I1014 15:30:03.725576 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6zzg\" (UniqueName: \"kubernetes.io/projected/d12e0b61-a53a-4f81-b2e2-8ae3efb42288-kube-api-access-g6zzg\") on node \"crc\" DevicePath \"\"" Oct 14 15:30:04 crc kubenswrapper[4860]: I1014 15:30:04.064912 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t" event={"ID":"d12e0b61-a53a-4f81-b2e2-8ae3efb42288","Type":"ContainerDied","Data":"76e2a775dd1cbe10e77379f68140b6a2c4244f29811b77c018ef20b3d699f282"} Oct 14 15:30:04 crc kubenswrapper[4860]: I1014 15:30:04.065232 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76e2a775dd1cbe10e77379f68140b6a2c4244f29811b77c018ef20b3d699f282" Oct 14 15:30:04 crc kubenswrapper[4860]: I1014 15:30:04.065184 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t" Oct 14 15:30:04 crc kubenswrapper[4860]: I1014 15:30:04.510569 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp"] Oct 14 15:30:04 crc kubenswrapper[4860]: I1014 15:30:04.520888 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340885-ldcfp"] Oct 14 15:30:05 crc kubenswrapper[4860]: I1014 15:30:05.075570 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fd546c2-8f3f-459f-bd94-75f8d755d9e5" path="/var/lib/kubelet/pods/0fd546c2-8f3f-459f-bd94-75f8d755d9e5/volumes" Oct 14 15:30:06 crc kubenswrapper[4860]: I1014 15:30:06.061908 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:30:06 crc kubenswrapper[4860]: E1014 15:30:06.062532 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:30:17 crc kubenswrapper[4860]: I1014 15:30:17.611896 4860 scope.go:117] "RemoveContainer" containerID="a764fb83a254d629a5b1eaedcb3c26d9d0578f958f0e45462f240854fbcd0c97" Oct 14 15:30:20 crc kubenswrapper[4860]: I1014 15:30:20.062155 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:30:20 crc kubenswrapper[4860]: E1014 15:30:20.062691 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:30:32 crc kubenswrapper[4860]: I1014 15:30:32.061924 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:30:32 crc kubenswrapper[4860]: I1014 15:30:32.332345 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"5b6077d3d18fd646893014ecf5133ac8cdd7d39e8862322f0c3fde57f1da2b99"} Oct 14 15:32:59 crc kubenswrapper[4860]: I1014 15:32:59.245993 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:32:59 crc kubenswrapper[4860]: I1014 15:32:59.246744 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:33:29 crc kubenswrapper[4860]: I1014 15:33:29.245385 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:33:29 crc kubenswrapper[4860]: I1014 15:33:29.245995 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:33:59 crc kubenswrapper[4860]: I1014 15:33:59.246336 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:33:59 crc kubenswrapper[4860]: I1014 15:33:59.247000 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:33:59 crc kubenswrapper[4860]: I1014 15:33:59.247074 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 15:33:59 crc kubenswrapper[4860]: I1014 15:33:59.247928 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b6077d3d18fd646893014ecf5133ac8cdd7d39e8862322f0c3fde57f1da2b99"} 
pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 15:33:59 crc kubenswrapper[4860]: I1014 15:33:59.247981 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://5b6077d3d18fd646893014ecf5133ac8cdd7d39e8862322f0c3fde57f1da2b99" gracePeriod=600 Oct 14 15:34:00 crc kubenswrapper[4860]: I1014 15:34:00.089603 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="5b6077d3d18fd646893014ecf5133ac8cdd7d39e8862322f0c3fde57f1da2b99" exitCode=0 Oct 14 15:34:00 crc kubenswrapper[4860]: I1014 15:34:00.089682 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"5b6077d3d18fd646893014ecf5133ac8cdd7d39e8862322f0c3fde57f1da2b99"} Oct 14 15:34:00 crc kubenswrapper[4860]: I1014 15:34:00.089955 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713"} Oct 14 15:34:00 crc kubenswrapper[4860]: I1014 15:34:00.089975 4860 scope.go:117] "RemoveContainer" containerID="e4157d9f7a0e1d34dc5c0e279b1a1cfceda9238dff22a18a888db9616b9ae484" Oct 14 15:34:18 crc kubenswrapper[4860]: I1014 15:34:18.241129 4860 generic.go:334] "Generic (PLEG): container finished" podID="ad612cd6-7c9d-44c4-aa1e-33055de4eee6" containerID="5abdd1639cf70132e42b60f76c93f18e5cf1b5f23292d2c7cbae7468fc0db405" exitCode=0 Oct 14 15:34:18 crc kubenswrapper[4860]: I1014 15:34:18.241197 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" event={"ID":"ad612cd6-7c9d-44c4-aa1e-33055de4eee6","Type":"ContainerDied","Data":"5abdd1639cf70132e42b60f76c93f18e5cf1b5f23292d2c7cbae7468fc0db405"} Oct 14 15:34:19 crc kubenswrapper[4860]: I1014 15:34:19.598718 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:34:19 crc kubenswrapper[4860]: I1014 15:34:19.747839 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrm4s\" (UniqueName: \"kubernetes.io/projected/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-kube-api-access-hrm4s\") pod \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " Oct 14 15:34:19 crc kubenswrapper[4860]: I1014 15:34:19.747886 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-libvirt-combined-ca-bundle\") pod \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " Oct 14 15:34:19 crc kubenswrapper[4860]: I1014 15:34:19.747977 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-ssh-key\") pod \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " Oct 14 15:34:19 crc kubenswrapper[4860]: I1014 15:34:19.748184 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-libvirt-secret-0\") pod \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " Oct 14 15:34:19 crc kubenswrapper[4860]: I1014 15:34:19.748883 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-inventory\") pod \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\" (UID: \"ad612cd6-7c9d-44c4-aa1e-33055de4eee6\") " Oct 14 15:34:19 crc kubenswrapper[4860]: I1014 15:34:19.753295 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-kube-api-access-hrm4s" (OuterVolumeSpecName: "kube-api-access-hrm4s") pod "ad612cd6-7c9d-44c4-aa1e-33055de4eee6" (UID: "ad612cd6-7c9d-44c4-aa1e-33055de4eee6"). InnerVolumeSpecName "kube-api-access-hrm4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:34:19 crc kubenswrapper[4860]: I1014 15:34:19.760262 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ad612cd6-7c9d-44c4-aa1e-33055de4eee6" (UID: "ad612cd6-7c9d-44c4-aa1e-33055de4eee6"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:34:19 crc kubenswrapper[4860]: I1014 15:34:19.775656 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-inventory" (OuterVolumeSpecName: "inventory") pod "ad612cd6-7c9d-44c4-aa1e-33055de4eee6" (UID: "ad612cd6-7c9d-44c4-aa1e-33055de4eee6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:34:19 crc kubenswrapper[4860]: I1014 15:34:19.777122 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "ad612cd6-7c9d-44c4-aa1e-33055de4eee6" (UID: "ad612cd6-7c9d-44c4-aa1e-33055de4eee6"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:34:19 crc kubenswrapper[4860]: I1014 15:34:19.780201 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ad612cd6-7c9d-44c4-aa1e-33055de4eee6" (UID: "ad612cd6-7c9d-44c4-aa1e-33055de4eee6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:34:19 crc kubenswrapper[4860]: I1014 15:34:19.852194 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 15:34:19 crc kubenswrapper[4860]: I1014 15:34:19.852233 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrm4s\" (UniqueName: \"kubernetes.io/projected/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-kube-api-access-hrm4s\") on node \"crc\" DevicePath \"\"" Oct 14 15:34:19 crc kubenswrapper[4860]: I1014 15:34:19.852247 4860 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:34:19 crc kubenswrapper[4860]: I1014 15:34:19.852257 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:34:19 crc kubenswrapper[4860]: I1014 15:34:19.852268 4860 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ad612cd6-7c9d-44c4-aa1e-33055de4eee6-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.262270 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" event={"ID":"ad612cd6-7c9d-44c4-aa1e-33055de4eee6","Type":"ContainerDied","Data":"35a02a964e4c609b835dedfb9ea135008c14813cbda4ccf61058028b0d58dbc5"} Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.262586 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35a02a964e4c609b835dedfb9ea135008c14813cbda4ccf61058028b0d58dbc5" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.262304 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.373746 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4"] Oct 14 15:34:20 crc kubenswrapper[4860]: E1014 15:34:20.374119 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad612cd6-7c9d-44c4-aa1e-33055de4eee6" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.374136 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad612cd6-7c9d-44c4-aa1e-33055de4eee6" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 14 15:34:20 crc kubenswrapper[4860]: E1014 15:34:20.374168 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d12e0b61-a53a-4f81-b2e2-8ae3efb42288" containerName="collect-profiles" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.374197 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12e0b61-a53a-4f81-b2e2-8ae3efb42288" containerName="collect-profiles" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.374409 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d12e0b61-a53a-4f81-b2e2-8ae3efb42288" containerName="collect-profiles" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.374426 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad612cd6-7c9d-44c4-aa1e-33055de4eee6" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.375359 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.387786 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.387978 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.389279 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4"] Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.394225 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.394381 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.394451 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.394668 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.395299 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.461798 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: 
\"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.461875 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.461910 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.461963 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.461986 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.462004 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.462051 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.462413 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n9s4\" (UniqueName: \"kubernetes.io/projected/5ea863c9-1241-4529-b07a-7ded53a8a9ca-kube-api-access-4n9s4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.462527 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.563937 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n9s4\" (UniqueName: \"kubernetes.io/projected/5ea863c9-1241-4529-b07a-7ded53a8a9ca-kube-api-access-4n9s4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.563979 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.564061 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.564147 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.564189 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.564218 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.564241 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.564260 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-cell1-compute-config-1\") 
pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.564279 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.566315 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.569692 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.570355 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.570563 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.570695 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.570794 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.571270 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: 
\"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.586741 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.595926 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n9s4\" (UniqueName: \"kubernetes.io/projected/5ea863c9-1241-4529-b07a-7ded53a8a9ca-kube-api-access-4n9s4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-52bv4\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:20 crc kubenswrapper[4860]: I1014 15:34:20.706529 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:34:21 crc kubenswrapper[4860]: I1014 15:34:21.221261 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4"] Oct 14 15:34:21 crc kubenswrapper[4860]: I1014 15:34:21.222365 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 15:34:21 crc kubenswrapper[4860]: I1014 15:34:21.272562 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" event={"ID":"5ea863c9-1241-4529-b07a-7ded53a8a9ca","Type":"ContainerStarted","Data":"90b8d6c64f6141e93253d029aef4ad29bac45a46daf26fa29062ef3b6d3cb575"} Oct 14 15:34:22 crc kubenswrapper[4860]: I1014 15:34:22.282485 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" event={"ID":"5ea863c9-1241-4529-b07a-7ded53a8a9ca","Type":"ContainerStarted","Data":"aafceb9bc96f2b676daa272afcafe03bb3193c39fc4b2ff88eb4d6f9983ed0d6"} Oct 14 15:34:57 crc kubenswrapper[4860]: I1014 15:34:57.204474 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" podStartSLOduration=37.049727754 podStartE2EDuration="37.204451169s" podCreationTimestamp="2025-10-14 15:34:20 +0000 UTC" firstStartedPulling="2025-10-14 15:34:21.222150714 +0000 UTC m=+2722.808934163" lastFinishedPulling="2025-10-14 15:34:21.376874139 +0000 UTC m=+2722.963657578" observedRunningTime="2025-10-14 15:34:22.298558007 +0000 UTC m=+2723.885341476" watchObservedRunningTime="2025-10-14 15:34:57.204451169 +0000 UTC m=+2758.791234628" Oct 14 15:34:57 crc kubenswrapper[4860]: I1014 15:34:57.207371 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-trrs6"] Oct 14 15:34:57 crc kubenswrapper[4860]: I1014 15:34:57.209968 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:34:57 crc kubenswrapper[4860]: I1014 15:34:57.233999 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-trrs6"] Oct 14 15:34:57 crc kubenswrapper[4860]: I1014 15:34:57.279266 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851f1605-1f59-4dbe-9157-f89c7630651d-catalog-content\") pod \"community-operators-trrs6\" (UID: \"851f1605-1f59-4dbe-9157-f89c7630651d\") " pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:34:57 crc kubenswrapper[4860]: I1014 15:34:57.280425 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcgcd\" (UniqueName: \"kubernetes.io/projected/851f1605-1f59-4dbe-9157-f89c7630651d-kube-api-access-xcgcd\") pod \"community-operators-trrs6\" (UID: \"851f1605-1f59-4dbe-9157-f89c7630651d\") " pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:34:57 crc kubenswrapper[4860]: I1014 15:34:57.280590 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851f1605-1f59-4dbe-9157-f89c7630651d-utilities\") pod \"community-operators-trrs6\" (UID: \"851f1605-1f59-4dbe-9157-f89c7630651d\") " pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:34:57 crc kubenswrapper[4860]: I1014 15:34:57.381887 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcgcd\" (UniqueName: \"kubernetes.io/projected/851f1605-1f59-4dbe-9157-f89c7630651d-kube-api-access-xcgcd\") pod \"community-operators-trrs6\" (UID: \"851f1605-1f59-4dbe-9157-f89c7630651d\") " pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:34:57 crc kubenswrapper[4860]: I1014 15:34:57.381998 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851f1605-1f59-4dbe-9157-f89c7630651d-utilities\") pod \"community-operators-trrs6\" (UID: \"851f1605-1f59-4dbe-9157-f89c7630651d\") " pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:34:57 crc kubenswrapper[4860]: I1014 15:34:57.382669 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851f1605-1f59-4dbe-9157-f89c7630651d-utilities\") pod \"community-operators-trrs6\" (UID: \"851f1605-1f59-4dbe-9157-f89c7630651d\") " pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:34:57 crc kubenswrapper[4860]: I1014 15:34:57.382864 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851f1605-1f59-4dbe-9157-f89c7630651d-catalog-content\") pod \"community-operators-trrs6\" (UID: \"851f1605-1f59-4dbe-9157-f89c7630651d\") " pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:34:57 crc kubenswrapper[4860]: I1014 15:34:57.383181 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851f1605-1f59-4dbe-9157-f89c7630651d-catalog-content\") pod \"community-operators-trrs6\" (UID: \"851f1605-1f59-4dbe-9157-f89c7630651d\") " pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:34:57 crc kubenswrapper[4860]: I1014 15:34:57.416701 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xcgcd\" (UniqueName: \"kubernetes.io/projected/851f1605-1f59-4dbe-9157-f89c7630651d-kube-api-access-xcgcd\") pod \"community-operators-trrs6\" (UID: \"851f1605-1f59-4dbe-9157-f89c7630651d\") " pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:34:57 crc kubenswrapper[4860]: I1014 15:34:57.531533 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:34:58 crc kubenswrapper[4860]: I1014 15:34:58.208263 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-trrs6"] Oct 14 15:34:58 crc kubenswrapper[4860]: I1014 15:34:58.587638 4860 generic.go:334] "Generic (PLEG): container finished" podID="851f1605-1f59-4dbe-9157-f89c7630651d" containerID="a3188e429817307fe32723d9aa0a31aed77b6f059c67a9864088d4b114137d57" exitCode=0 Oct 14 15:34:58 crc kubenswrapper[4860]: I1014 15:34:58.587969 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trrs6" event={"ID":"851f1605-1f59-4dbe-9157-f89c7630651d","Type":"ContainerDied","Data":"a3188e429817307fe32723d9aa0a31aed77b6f059c67a9864088d4b114137d57"} Oct 14 15:34:58 crc kubenswrapper[4860]: I1014 15:34:58.587998 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trrs6" event={"ID":"851f1605-1f59-4dbe-9157-f89c7630651d","Type":"ContainerStarted","Data":"ed04bb977d6f3ded31419e142a501faf5e3be989f5984591eb962920a67a2eda"} Oct 14 15:35:00 crc kubenswrapper[4860]: I1014 15:35:00.609568 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trrs6" event={"ID":"851f1605-1f59-4dbe-9157-f89c7630651d","Type":"ContainerStarted","Data":"dc7722e0db55083091fce869fa7d31bbe026504de2fd7523bc8124a5f7af4b13"} Oct 14 15:35:01 crc kubenswrapper[4860]: I1014 15:35:01.620072 4860 generic.go:334] "Generic (PLEG): container finished" podID="851f1605-1f59-4dbe-9157-f89c7630651d" containerID="dc7722e0db55083091fce869fa7d31bbe026504de2fd7523bc8124a5f7af4b13" exitCode=0 Oct 14 15:35:01 crc kubenswrapper[4860]: I1014 15:35:01.620141 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trrs6" event={"ID":"851f1605-1f59-4dbe-9157-f89c7630651d","Type":"ContainerDied","Data":"dc7722e0db55083091fce869fa7d31bbe026504de2fd7523bc8124a5f7af4b13"} Oct 14 15:35:02 crc kubenswrapper[4860]: I1014 15:35:02.630596 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trrs6" event={"ID":"851f1605-1f59-4dbe-9157-f89c7630651d","Type":"ContainerStarted","Data":"b7b1fbcdda57cde36a44133f5cb519adea0576629fd4eef1e9e3d52a61dee24e"} Oct 14 15:35:02 crc kubenswrapper[4860]: I1014 15:35:02.654371 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-trrs6" podStartSLOduration=1.938391861 podStartE2EDuration="5.654349912s" podCreationTimestamp="2025-10-14 15:34:57 +0000 UTC" firstStartedPulling="2025-10-14 15:34:58.589530082 +0000 UTC m=+2760.176313531" lastFinishedPulling="2025-10-14 15:35:02.305488133 +0000 UTC m=+2763.892271582" observedRunningTime="2025-10-14 15:35:02.64767813 +0000 UTC m=+2764.234461579" watchObservedRunningTime="2025-10-14 15:35:02.654349912 +0000 UTC m=+2764.241133361" Oct 14 15:35:04 crc kubenswrapper[4860]: I1014 15:35:04.576161 4860 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-x8lsh"] Oct 14 15:35:04 crc kubenswrapper[4860]: I1014 15:35:04.578794 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:04 crc kubenswrapper[4860]: I1014 15:35:04.604298 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x8lsh"] Oct 14 15:35:04 crc kubenswrapper[4860]: I1014 15:35:04.740680 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkbps\" (UniqueName: \"kubernetes.io/projected/2cabf421-ae16-4253-aaf0-7711164d92e4-kube-api-access-rkbps\") pod \"certified-operators-x8lsh\" (UID: \"2cabf421-ae16-4253-aaf0-7711164d92e4\") " pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:04 crc kubenswrapper[4860]: I1014 15:35:04.741193 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cabf421-ae16-4253-aaf0-7711164d92e4-utilities\") pod \"certified-operators-x8lsh\" (UID: \"2cabf421-ae16-4253-aaf0-7711164d92e4\") " pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:04 crc kubenswrapper[4860]: I1014 15:35:04.741576 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cabf421-ae16-4253-aaf0-7711164d92e4-catalog-content\") pod \"certified-operators-x8lsh\" (UID: \"2cabf421-ae16-4253-aaf0-7711164d92e4\") " pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:04 crc kubenswrapper[4860]: I1014 15:35:04.843516 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cabf421-ae16-4253-aaf0-7711164d92e4-utilities\") pod \"certified-operators-x8lsh\" (UID: \"2cabf421-ae16-4253-aaf0-7711164d92e4\") " pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:04 crc kubenswrapper[4860]: I1014 15:35:04.843600 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cabf421-ae16-4253-aaf0-7711164d92e4-catalog-content\") pod \"certified-operators-x8lsh\" (UID: \"2cabf421-ae16-4253-aaf0-7711164d92e4\") " pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:04 crc kubenswrapper[4860]: I1014 15:35:04.843632 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkbps\" (UniqueName: \"kubernetes.io/projected/2cabf421-ae16-4253-aaf0-7711164d92e4-kube-api-access-rkbps\") pod \"certified-operators-x8lsh\" (UID: \"2cabf421-ae16-4253-aaf0-7711164d92e4\") " pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:04 crc kubenswrapper[4860]: I1014 15:35:04.844446 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cabf421-ae16-4253-aaf0-7711164d92e4-utilities\") pod \"certified-operators-x8lsh\" (UID: \"2cabf421-ae16-4253-aaf0-7711164d92e4\") " pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:04 crc kubenswrapper[4860]: I1014 15:35:04.844529 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cabf421-ae16-4253-aaf0-7711164d92e4-catalog-content\") pod \"certified-operators-x8lsh\" (UID: 
\"2cabf421-ae16-4253-aaf0-7711164d92e4\") " pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:04 crc kubenswrapper[4860]: I1014 15:35:04.880007 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkbps\" (UniqueName: \"kubernetes.io/projected/2cabf421-ae16-4253-aaf0-7711164d92e4-kube-api-access-rkbps\") pod \"certified-operators-x8lsh\" (UID: \"2cabf421-ae16-4253-aaf0-7711164d92e4\") " pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:04 crc kubenswrapper[4860]: I1014 15:35:04.905646 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:05 crc kubenswrapper[4860]: I1014 15:35:05.557124 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x8lsh"] Oct 14 15:35:05 crc kubenswrapper[4860]: I1014 15:35:05.655793 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8lsh" event={"ID":"2cabf421-ae16-4253-aaf0-7711164d92e4","Type":"ContainerStarted","Data":"304eaeb213f6133197fc009c444f3f9c2d800bab9b3fbd6cc55262a43fcb6d67"} Oct 14 15:35:06 crc kubenswrapper[4860]: I1014 15:35:06.664396 4860 generic.go:334] "Generic (PLEG): container finished" podID="2cabf421-ae16-4253-aaf0-7711164d92e4" containerID="b7573cec60e2828f3771005b1da7447a6020738f86f668a43d60a7dc825fe1ba" exitCode=0 Oct 14 15:35:06 crc kubenswrapper[4860]: I1014 15:35:06.664455 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8lsh" event={"ID":"2cabf421-ae16-4253-aaf0-7711164d92e4","Type":"ContainerDied","Data":"b7573cec60e2828f3771005b1da7447a6020738f86f668a43d60a7dc825fe1ba"} Oct 14 15:35:07 crc kubenswrapper[4860]: I1014 15:35:07.532551 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:35:07 crc kubenswrapper[4860]: I1014 15:35:07.532993 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:35:07 crc kubenswrapper[4860]: I1014 15:35:07.580260 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:35:07 crc kubenswrapper[4860]: I1014 15:35:07.717431 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:35:08 crc kubenswrapper[4860]: I1014 15:35:08.684555 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8lsh" event={"ID":"2cabf421-ae16-4253-aaf0-7711164d92e4","Type":"ContainerStarted","Data":"6455728bc459bf920ff79a88b09cab19f0c321dda39ebb56015e2eef944dfa94"} Oct 14 15:35:08 crc kubenswrapper[4860]: I1014 15:35:08.773480 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-trrs6"] Oct 14 15:35:10 crc kubenswrapper[4860]: I1014 15:35:10.705448 4860 generic.go:334] "Generic (PLEG): container finished" podID="2cabf421-ae16-4253-aaf0-7711164d92e4" containerID="6455728bc459bf920ff79a88b09cab19f0c321dda39ebb56015e2eef944dfa94" exitCode=0 Oct 14 15:35:10 crc kubenswrapper[4860]: I1014 15:35:10.705524 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8lsh" 
event={"ID":"2cabf421-ae16-4253-aaf0-7711164d92e4","Type":"ContainerDied","Data":"6455728bc459bf920ff79a88b09cab19f0c321dda39ebb56015e2eef944dfa94"} Oct 14 15:35:10 crc kubenswrapper[4860]: I1014 15:35:10.706380 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-trrs6" podUID="851f1605-1f59-4dbe-9157-f89c7630651d" containerName="registry-server" containerID="cri-o://b7b1fbcdda57cde36a44133f5cb519adea0576629fd4eef1e9e3d52a61dee24e" gracePeriod=2 Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.227175 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.367874 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851f1605-1f59-4dbe-9157-f89c7630651d-utilities\") pod \"851f1605-1f59-4dbe-9157-f89c7630651d\" (UID: \"851f1605-1f59-4dbe-9157-f89c7630651d\") " Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.368279 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851f1605-1f59-4dbe-9157-f89c7630651d-catalog-content\") pod \"851f1605-1f59-4dbe-9157-f89c7630651d\" (UID: \"851f1605-1f59-4dbe-9157-f89c7630651d\") " Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.368391 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgcd\" (UniqueName: \"kubernetes.io/projected/851f1605-1f59-4dbe-9157-f89c7630651d-kube-api-access-xcgcd\") pod \"851f1605-1f59-4dbe-9157-f89c7630651d\" (UID: \"851f1605-1f59-4dbe-9157-f89c7630651d\") " Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.379544 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/851f1605-1f59-4dbe-9157-f89c7630651d-kube-api-access-xcgcd" (OuterVolumeSpecName: "kube-api-access-xcgcd") pod "851f1605-1f59-4dbe-9157-f89c7630651d" (UID: "851f1605-1f59-4dbe-9157-f89c7630651d"). InnerVolumeSpecName "kube-api-access-xcgcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.379892 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851f1605-1f59-4dbe-9157-f89c7630651d-utilities" (OuterVolumeSpecName: "utilities") pod "851f1605-1f59-4dbe-9157-f89c7630651d" (UID: "851f1605-1f59-4dbe-9157-f89c7630651d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.415987 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851f1605-1f59-4dbe-9157-f89c7630651d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "851f1605-1f59-4dbe-9157-f89c7630651d" (UID: "851f1605-1f59-4dbe-9157-f89c7630651d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.471061 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851f1605-1f59-4dbe-9157-f89c7630651d-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.471335 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851f1605-1f59-4dbe-9157-f89c7630651d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.471405 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgcd\" (UniqueName: \"kubernetes.io/projected/851f1605-1f59-4dbe-9157-f89c7630651d-kube-api-access-xcgcd\") on node \"crc\" DevicePath \"\"" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.717353 4860 generic.go:334] "Generic (PLEG): container finished" podID="851f1605-1f59-4dbe-9157-f89c7630651d" containerID="b7b1fbcdda57cde36a44133f5cb519adea0576629fd4eef1e9e3d52a61dee24e" exitCode=0 Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.717426 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trrs6" event={"ID":"851f1605-1f59-4dbe-9157-f89c7630651d","Type":"ContainerDied","Data":"b7b1fbcdda57cde36a44133f5cb519adea0576629fd4eef1e9e3d52a61dee24e"} Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.717446 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trrs6" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.719090 4860 scope.go:117] "RemoveContainer" containerID="b7b1fbcdda57cde36a44133f5cb519adea0576629fd4eef1e9e3d52a61dee24e" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.719228 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trrs6" event={"ID":"851f1605-1f59-4dbe-9157-f89c7630651d","Type":"ContainerDied","Data":"ed04bb977d6f3ded31419e142a501faf5e3be989f5984591eb962920a67a2eda"} Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.723790 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8lsh" event={"ID":"2cabf421-ae16-4253-aaf0-7711164d92e4","Type":"ContainerStarted","Data":"af4cc049de0496085548344d38904a3311d8955e25488ec975aaf6175b7b420c"} Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.747701 4860 scope.go:117] "RemoveContainer" containerID="dc7722e0db55083091fce869fa7d31bbe026504de2fd7523bc8124a5f7af4b13" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.765010 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x8lsh" podStartSLOduration=3.320655989 podStartE2EDuration="7.764989693s" podCreationTimestamp="2025-10-14 15:35:04 +0000 UTC" firstStartedPulling="2025-10-14 15:35:06.665871135 +0000 UTC m=+2768.252654584" lastFinishedPulling="2025-10-14 15:35:11.110204829 +0000 UTC m=+2772.696988288" observedRunningTime="2025-10-14 15:35:11.74763418 +0000 UTC m=+2773.334417629" watchObservedRunningTime="2025-10-14 15:35:11.764989693 +0000 UTC m=+2773.351773142" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.779905 4860 scope.go:117] "RemoveContainer" containerID="a3188e429817307fe32723d9aa0a31aed77b6f059c67a9864088d4b114137d57" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.780426 4860 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/community-operators-trrs6"] Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.789127 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-trrs6"] Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.799302 4860 scope.go:117] "RemoveContainer" containerID="b7b1fbcdda57cde36a44133f5cb519adea0576629fd4eef1e9e3d52a61dee24e" Oct 14 15:35:11 crc kubenswrapper[4860]: E1014 15:35:11.799816 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7b1fbcdda57cde36a44133f5cb519adea0576629fd4eef1e9e3d52a61dee24e\": container with ID starting with b7b1fbcdda57cde36a44133f5cb519adea0576629fd4eef1e9e3d52a61dee24e not found: ID does not exist" containerID="b7b1fbcdda57cde36a44133f5cb519adea0576629fd4eef1e9e3d52a61dee24e" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.799861 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7b1fbcdda57cde36a44133f5cb519adea0576629fd4eef1e9e3d52a61dee24e"} err="failed to get container status \"b7b1fbcdda57cde36a44133f5cb519adea0576629fd4eef1e9e3d52a61dee24e\": rpc error: code = NotFound desc = could not find container \"b7b1fbcdda57cde36a44133f5cb519adea0576629fd4eef1e9e3d52a61dee24e\": container with ID starting with b7b1fbcdda57cde36a44133f5cb519adea0576629fd4eef1e9e3d52a61dee24e not found: ID does not exist" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.799887 4860 scope.go:117] "RemoveContainer" containerID="dc7722e0db55083091fce869fa7d31bbe026504de2fd7523bc8124a5f7af4b13" Oct 14 15:35:11 crc kubenswrapper[4860]: E1014 15:35:11.800367 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc7722e0db55083091fce869fa7d31bbe026504de2fd7523bc8124a5f7af4b13\": container with ID starting with dc7722e0db55083091fce869fa7d31bbe026504de2fd7523bc8124a5f7af4b13 not found: ID does not exist" containerID="dc7722e0db55083091fce869fa7d31bbe026504de2fd7523bc8124a5f7af4b13" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.800396 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7722e0db55083091fce869fa7d31bbe026504de2fd7523bc8124a5f7af4b13"} err="failed to get container status \"dc7722e0db55083091fce869fa7d31bbe026504de2fd7523bc8124a5f7af4b13\": rpc error: code = NotFound desc = could not find container \"dc7722e0db55083091fce869fa7d31bbe026504de2fd7523bc8124a5f7af4b13\": container with ID starting with dc7722e0db55083091fce869fa7d31bbe026504de2fd7523bc8124a5f7af4b13 not found: ID does not exist" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.800411 4860 scope.go:117] "RemoveContainer" containerID="a3188e429817307fe32723d9aa0a31aed77b6f059c67a9864088d4b114137d57" Oct 14 15:35:11 crc kubenswrapper[4860]: E1014 15:35:11.800867 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3188e429817307fe32723d9aa0a31aed77b6f059c67a9864088d4b114137d57\": container with ID starting with a3188e429817307fe32723d9aa0a31aed77b6f059c67a9864088d4b114137d57 not found: ID does not exist" containerID="a3188e429817307fe32723d9aa0a31aed77b6f059c67a9864088d4b114137d57" Oct 14 15:35:11 crc kubenswrapper[4860]: I1014 15:35:11.800897 4860 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a3188e429817307fe32723d9aa0a31aed77b6f059c67a9864088d4b114137d57"} err="failed to get container status \"a3188e429817307fe32723d9aa0a31aed77b6f059c67a9864088d4b114137d57\": rpc error: code = NotFound desc = could not find container \"a3188e429817307fe32723d9aa0a31aed77b6f059c67a9864088d4b114137d57\": container with ID starting with a3188e429817307fe32723d9aa0a31aed77b6f059c67a9864088d4b114137d57 not found: ID does not exist" Oct 14 15:35:13 crc kubenswrapper[4860]: I1014 15:35:13.075004 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="851f1605-1f59-4dbe-9157-f89c7630651d" path="/var/lib/kubelet/pods/851f1605-1f59-4dbe-9157-f89c7630651d/volumes" Oct 14 15:35:14 crc kubenswrapper[4860]: I1014 15:35:14.906259 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:14 crc kubenswrapper[4860]: I1014 15:35:14.906323 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:15 crc kubenswrapper[4860]: I1014 15:35:15.954737 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-x8lsh" podUID="2cabf421-ae16-4253-aaf0-7711164d92e4" containerName="registry-server" probeResult="failure" output=< Oct 14 15:35:15 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:35:15 crc kubenswrapper[4860]: > Oct 14 15:35:24 crc kubenswrapper[4860]: I1014 15:35:24.953365 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:25 crc kubenswrapper[4860]: I1014 15:35:25.001951 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:25 crc kubenswrapper[4860]: I1014 15:35:25.188724 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x8lsh"] Oct 14 15:35:26 crc kubenswrapper[4860]: I1014 15:35:26.847240 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x8lsh" podUID="2cabf421-ae16-4253-aaf0-7711164d92e4" containerName="registry-server" containerID="cri-o://af4cc049de0496085548344d38904a3311d8955e25488ec975aaf6175b7b420c" gracePeriod=2 Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.298610 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.455964 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkbps\" (UniqueName: \"kubernetes.io/projected/2cabf421-ae16-4253-aaf0-7711164d92e4-kube-api-access-rkbps\") pod \"2cabf421-ae16-4253-aaf0-7711164d92e4\" (UID: \"2cabf421-ae16-4253-aaf0-7711164d92e4\") " Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.456082 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cabf421-ae16-4253-aaf0-7711164d92e4-catalog-content\") pod \"2cabf421-ae16-4253-aaf0-7711164d92e4\" (UID: \"2cabf421-ae16-4253-aaf0-7711164d92e4\") " Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.456160 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cabf421-ae16-4253-aaf0-7711164d92e4-utilities\") pod \"2cabf421-ae16-4253-aaf0-7711164d92e4\" (UID: \"2cabf421-ae16-4253-aaf0-7711164d92e4\") " Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.457000 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cabf421-ae16-4253-aaf0-7711164d92e4-utilities" (OuterVolumeSpecName: "utilities") pod "2cabf421-ae16-4253-aaf0-7711164d92e4" (UID: "2cabf421-ae16-4253-aaf0-7711164d92e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.462403 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cabf421-ae16-4253-aaf0-7711164d92e4-kube-api-access-rkbps" (OuterVolumeSpecName: "kube-api-access-rkbps") pod "2cabf421-ae16-4253-aaf0-7711164d92e4" (UID: "2cabf421-ae16-4253-aaf0-7711164d92e4"). InnerVolumeSpecName "kube-api-access-rkbps". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.502731 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cabf421-ae16-4253-aaf0-7711164d92e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cabf421-ae16-4253-aaf0-7711164d92e4" (UID: "2cabf421-ae16-4253-aaf0-7711164d92e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.558828 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cabf421-ae16-4253-aaf0-7711164d92e4-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.558870 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkbps\" (UniqueName: \"kubernetes.io/projected/2cabf421-ae16-4253-aaf0-7711164d92e4-kube-api-access-rkbps\") on node \"crc\" DevicePath \"\"" Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.558880 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cabf421-ae16-4253-aaf0-7711164d92e4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.858604 4860 generic.go:334] "Generic (PLEG): container finished" podID="2cabf421-ae16-4253-aaf0-7711164d92e4" containerID="af4cc049de0496085548344d38904a3311d8955e25488ec975aaf6175b7b420c" exitCode=0 Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.858666 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8lsh" Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.858684 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8lsh" event={"ID":"2cabf421-ae16-4253-aaf0-7711164d92e4","Type":"ContainerDied","Data":"af4cc049de0496085548344d38904a3311d8955e25488ec975aaf6175b7b420c"} Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.859955 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8lsh" event={"ID":"2cabf421-ae16-4253-aaf0-7711164d92e4","Type":"ContainerDied","Data":"304eaeb213f6133197fc009c444f3f9c2d800bab9b3fbd6cc55262a43fcb6d67"} Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.859977 4860 scope.go:117] "RemoveContainer" containerID="af4cc049de0496085548344d38904a3311d8955e25488ec975aaf6175b7b420c" Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.883718 4860 scope.go:117] "RemoveContainer" containerID="6455728bc459bf920ff79a88b09cab19f0c321dda39ebb56015e2eef944dfa94" Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.906473 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x8lsh"] Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.913415 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x8lsh"] Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.918713 4860 scope.go:117] "RemoveContainer" containerID="b7573cec60e2828f3771005b1da7447a6020738f86f668a43d60a7dc825fe1ba" Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.970792 4860 scope.go:117] "RemoveContainer" containerID="af4cc049de0496085548344d38904a3311d8955e25488ec975aaf6175b7b420c" Oct 14 15:35:27 crc kubenswrapper[4860]: E1014 15:35:27.971316 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af4cc049de0496085548344d38904a3311d8955e25488ec975aaf6175b7b420c\": container with ID starting with af4cc049de0496085548344d38904a3311d8955e25488ec975aaf6175b7b420c not found: ID does not exist" containerID="af4cc049de0496085548344d38904a3311d8955e25488ec975aaf6175b7b420c" Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.971356 
4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af4cc049de0496085548344d38904a3311d8955e25488ec975aaf6175b7b420c"} err="failed to get container status \"af4cc049de0496085548344d38904a3311d8955e25488ec975aaf6175b7b420c\": rpc error: code = NotFound desc = could not find container \"af4cc049de0496085548344d38904a3311d8955e25488ec975aaf6175b7b420c\": container with ID starting with af4cc049de0496085548344d38904a3311d8955e25488ec975aaf6175b7b420c not found: ID does not exist" Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.971382 4860 scope.go:117] "RemoveContainer" containerID="6455728bc459bf920ff79a88b09cab19f0c321dda39ebb56015e2eef944dfa94" Oct 14 15:35:27 crc kubenswrapper[4860]: E1014 15:35:27.971707 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6455728bc459bf920ff79a88b09cab19f0c321dda39ebb56015e2eef944dfa94\": container with ID starting with 6455728bc459bf920ff79a88b09cab19f0c321dda39ebb56015e2eef944dfa94 not found: ID does not exist" containerID="6455728bc459bf920ff79a88b09cab19f0c321dda39ebb56015e2eef944dfa94" Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.971733 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6455728bc459bf920ff79a88b09cab19f0c321dda39ebb56015e2eef944dfa94"} err="failed to get container status \"6455728bc459bf920ff79a88b09cab19f0c321dda39ebb56015e2eef944dfa94\": rpc error: code = NotFound desc = could not find container \"6455728bc459bf920ff79a88b09cab19f0c321dda39ebb56015e2eef944dfa94\": container with ID starting with 6455728bc459bf920ff79a88b09cab19f0c321dda39ebb56015e2eef944dfa94 not found: ID does not exist" Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.971755 4860 scope.go:117] "RemoveContainer" containerID="b7573cec60e2828f3771005b1da7447a6020738f86f668a43d60a7dc825fe1ba" Oct 14 15:35:27 crc kubenswrapper[4860]: E1014 15:35:27.971977 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7573cec60e2828f3771005b1da7447a6020738f86f668a43d60a7dc825fe1ba\": container with ID starting with b7573cec60e2828f3771005b1da7447a6020738f86f668a43d60a7dc825fe1ba not found: ID does not exist" containerID="b7573cec60e2828f3771005b1da7447a6020738f86f668a43d60a7dc825fe1ba" Oct 14 15:35:27 crc kubenswrapper[4860]: I1014 15:35:27.971994 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7573cec60e2828f3771005b1da7447a6020738f86f668a43d60a7dc825fe1ba"} err="failed to get container status \"b7573cec60e2828f3771005b1da7447a6020738f86f668a43d60a7dc825fe1ba\": rpc error: code = NotFound desc = could not find container \"b7573cec60e2828f3771005b1da7447a6020738f86f668a43d60a7dc825fe1ba\": container with ID starting with b7573cec60e2828f3771005b1da7447a6020738f86f668a43d60a7dc825fe1ba not found: ID does not exist" Oct 14 15:35:29 crc kubenswrapper[4860]: I1014 15:35:29.073902 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cabf421-ae16-4253-aaf0-7711164d92e4" path="/var/lib/kubelet/pods/2cabf421-ae16-4253-aaf0-7711164d92e4/volumes" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.596067 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vj6vc"] Oct 14 15:35:58 crc kubenswrapper[4860]: E1014 15:35:58.597110 4860 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2cabf421-ae16-4253-aaf0-7711164d92e4" containerName="extract-content" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.597132 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cabf421-ae16-4253-aaf0-7711164d92e4" containerName="extract-content" Oct 14 15:35:58 crc kubenswrapper[4860]: E1014 15:35:58.597149 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cabf421-ae16-4253-aaf0-7711164d92e4" containerName="registry-server" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.597156 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cabf421-ae16-4253-aaf0-7711164d92e4" containerName="registry-server" Oct 14 15:35:58 crc kubenswrapper[4860]: E1014 15:35:58.597185 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cabf421-ae16-4253-aaf0-7711164d92e4" containerName="extract-utilities" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.597194 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cabf421-ae16-4253-aaf0-7711164d92e4" containerName="extract-utilities" Oct 14 15:35:58 crc kubenswrapper[4860]: E1014 15:35:58.597207 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851f1605-1f59-4dbe-9157-f89c7630651d" containerName="registry-server" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.597214 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="851f1605-1f59-4dbe-9157-f89c7630651d" containerName="registry-server" Oct 14 15:35:58 crc kubenswrapper[4860]: E1014 15:35:58.597231 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851f1605-1f59-4dbe-9157-f89c7630651d" containerName="extract-content" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.597238 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="851f1605-1f59-4dbe-9157-f89c7630651d" containerName="extract-content" Oct 14 15:35:58 crc kubenswrapper[4860]: E1014 15:35:58.597257 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851f1605-1f59-4dbe-9157-f89c7630651d" containerName="extract-utilities" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.597264 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="851f1605-1f59-4dbe-9157-f89c7630651d" containerName="extract-utilities" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.597499 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="851f1605-1f59-4dbe-9157-f89c7630651d" containerName="registry-server" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.597518 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cabf421-ae16-4253-aaf0-7711164d92e4" containerName="registry-server" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.599176 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.615758 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vj6vc"] Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.701738 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-catalog-content\") pod \"redhat-operators-vj6vc\" (UID: \"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22\") " pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.701803 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-utilities\") pod \"redhat-operators-vj6vc\" (UID: \"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22\") " pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.702220 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktpw7\" (UniqueName: \"kubernetes.io/projected/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-kube-api-access-ktpw7\") pod \"redhat-operators-vj6vc\" (UID: \"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22\") " pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.804215 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-utilities\") pod \"redhat-operators-vj6vc\" (UID: \"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22\") " pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.804352 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktpw7\" (UniqueName: \"kubernetes.io/projected/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-kube-api-access-ktpw7\") pod \"redhat-operators-vj6vc\" (UID: \"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22\") " pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.804497 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-catalog-content\") pod \"redhat-operators-vj6vc\" (UID: \"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22\") " pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.804749 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-utilities\") pod \"redhat-operators-vj6vc\" (UID: \"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22\") " pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.804782 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-catalog-content\") pod \"redhat-operators-vj6vc\" (UID: \"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22\") " pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.825537 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ktpw7\" (UniqueName: \"kubernetes.io/projected/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-kube-api-access-ktpw7\") pod \"redhat-operators-vj6vc\" (UID: \"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22\") " pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:35:58 crc kubenswrapper[4860]: I1014 15:35:58.931627 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:35:59 crc kubenswrapper[4860]: I1014 15:35:59.245968 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:35:59 crc kubenswrapper[4860]: I1014 15:35:59.246023 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:35:59 crc kubenswrapper[4860]: I1014 15:35:59.421862 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vj6vc"] Oct 14 15:36:00 crc kubenswrapper[4860]: I1014 15:36:00.182636 4860 generic.go:334] "Generic (PLEG): container finished" podID="a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" containerID="1d398494bc4806a811c87c274db03842a74870beaefbcbef1a82454bac0960ee" exitCode=0 Oct 14 15:36:00 crc kubenswrapper[4860]: I1014 15:36:00.182738 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vj6vc" event={"ID":"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22","Type":"ContainerDied","Data":"1d398494bc4806a811c87c274db03842a74870beaefbcbef1a82454bac0960ee"} Oct 14 15:36:00 crc kubenswrapper[4860]: I1014 15:36:00.183944 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vj6vc" event={"ID":"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22","Type":"ContainerStarted","Data":"4589441b8c2f977748d7604646d4028ebc90b9c476b5f2730ba4f58739c57041"} Oct 14 15:36:02 crc kubenswrapper[4860]: I1014 15:36:02.202705 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vj6vc" event={"ID":"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22","Type":"ContainerStarted","Data":"609062045f5f7dc15c8ae3fb4769fd46e4c478fa8677f1ab4d8d2577d2d6d2f2"} Oct 14 15:36:07 crc kubenswrapper[4860]: I1014 15:36:07.246928 4860 generic.go:334] "Generic (PLEG): container finished" podID="a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" containerID="609062045f5f7dc15c8ae3fb4769fd46e4c478fa8677f1ab4d8d2577d2d6d2f2" exitCode=0 Oct 14 15:36:07 crc kubenswrapper[4860]: I1014 15:36:07.246985 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vj6vc" event={"ID":"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22","Type":"ContainerDied","Data":"609062045f5f7dc15c8ae3fb4769fd46e4c478fa8677f1ab4d8d2577d2d6d2f2"} Oct 14 15:36:08 crc kubenswrapper[4860]: I1014 15:36:08.256658 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vj6vc" event={"ID":"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22","Type":"ContainerStarted","Data":"9406c76ca69d253ab4da94eec5f2189f407d212f573856b01dfa5f2d85c6eac3"} Oct 14 15:36:08 crc kubenswrapper[4860]: I1014 15:36:08.279408 4860 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vj6vc" podStartSLOduration=2.819083583 podStartE2EDuration="10.279385835s" podCreationTimestamp="2025-10-14 15:35:58 +0000 UTC" firstStartedPulling="2025-10-14 15:36:00.186043158 +0000 UTC m=+2821.772826607" lastFinishedPulling="2025-10-14 15:36:07.64634541 +0000 UTC m=+2829.233128859" observedRunningTime="2025-10-14 15:36:08.272497698 +0000 UTC m=+2829.859281157" watchObservedRunningTime="2025-10-14 15:36:08.279385835 +0000 UTC m=+2829.866169284" Oct 14 15:36:08 crc kubenswrapper[4860]: I1014 15:36:08.938633 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:36:08 crc kubenswrapper[4860]: I1014 15:36:08.939072 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:36:09 crc kubenswrapper[4860]: I1014 15:36:09.990562 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vj6vc" podUID="a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" containerName="registry-server" probeResult="failure" output=< Oct 14 15:36:09 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:36:09 crc kubenswrapper[4860]: > Oct 14 15:36:19 crc kubenswrapper[4860]: I1014 15:36:19.987157 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vj6vc" podUID="a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" containerName="registry-server" probeResult="failure" output=< Oct 14 15:36:19 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:36:19 crc kubenswrapper[4860]: > Oct 14 15:36:22 crc kubenswrapper[4860]: I1014 15:36:22.455260 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5nl9j"] Oct 14 15:36:22 crc kubenswrapper[4860]: I1014 15:36:22.457579 4860 util.go:30] "No sandbox for pod can be found. 
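The repeated startup-probe failures above target gRPC on port 50051 and give up after one second; until the registry-server has finished loading the extracted catalog, nothing is listening there, so the probe keeps failing and retrying. The real check is the catalog image's gRPC health probe, but a plain TCP dial with the same timeout reproduces the exact failure message:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probeOnce mimics the startup probe's timeout behaviour with a plain
    // TCP dial; the real check is a gRPC health probe.
    func probeOnce(addr string, timeout time.Duration) error {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            return fmt.Errorf("timeout: failed to connect service %q within %s",
                addr, timeout)
        }
        return conn.Close()
    }

    func main() {
        if err := probeOnce(":50051", time.Second); err != nil {
            fmt.Println(err) // timeout: failed to connect service ":50051" within 1s
        }
    }

Once the catalog is served, the next probe succeeds and the kubelet flips the pod through "startup started" to "readiness ready", the same transition visible for each marketplace pod in this log.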
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:22 crc kubenswrapper[4860]: I1014 15:36:22.472544 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5nl9j"] Oct 14 15:36:22 crc kubenswrapper[4860]: I1014 15:36:22.606992 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3537b928-b237-45e1-a5e0-50380cbc1cc8-utilities\") pod \"redhat-marketplace-5nl9j\" (UID: \"3537b928-b237-45e1-a5e0-50380cbc1cc8\") " pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:22 crc kubenswrapper[4860]: I1014 15:36:22.607106 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j725m\" (UniqueName: \"kubernetes.io/projected/3537b928-b237-45e1-a5e0-50380cbc1cc8-kube-api-access-j725m\") pod \"redhat-marketplace-5nl9j\" (UID: \"3537b928-b237-45e1-a5e0-50380cbc1cc8\") " pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:22 crc kubenswrapper[4860]: I1014 15:36:22.607398 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3537b928-b237-45e1-a5e0-50380cbc1cc8-catalog-content\") pod \"redhat-marketplace-5nl9j\" (UID: \"3537b928-b237-45e1-a5e0-50380cbc1cc8\") " pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:22 crc kubenswrapper[4860]: I1014 15:36:22.709528 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3537b928-b237-45e1-a5e0-50380cbc1cc8-utilities\") pod \"redhat-marketplace-5nl9j\" (UID: \"3537b928-b237-45e1-a5e0-50380cbc1cc8\") " pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:22 crc kubenswrapper[4860]: I1014 15:36:22.709599 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j725m\" (UniqueName: \"kubernetes.io/projected/3537b928-b237-45e1-a5e0-50380cbc1cc8-kube-api-access-j725m\") pod \"redhat-marketplace-5nl9j\" (UID: \"3537b928-b237-45e1-a5e0-50380cbc1cc8\") " pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:22 crc kubenswrapper[4860]: I1014 15:36:22.709736 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3537b928-b237-45e1-a5e0-50380cbc1cc8-catalog-content\") pod \"redhat-marketplace-5nl9j\" (UID: \"3537b928-b237-45e1-a5e0-50380cbc1cc8\") " pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:22 crc kubenswrapper[4860]: I1014 15:36:22.710239 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3537b928-b237-45e1-a5e0-50380cbc1cc8-utilities\") pod \"redhat-marketplace-5nl9j\" (UID: \"3537b928-b237-45e1-a5e0-50380cbc1cc8\") " pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:22 crc kubenswrapper[4860]: I1014 15:36:22.710896 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3537b928-b237-45e1-a5e0-50380cbc1cc8-catalog-content\") pod \"redhat-marketplace-5nl9j\" (UID: \"3537b928-b237-45e1-a5e0-50380cbc1cc8\") " pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:22 crc kubenswrapper[4860]: I1014 15:36:22.730263 4860 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-j725m\" (UniqueName: \"kubernetes.io/projected/3537b928-b237-45e1-a5e0-50380cbc1cc8-kube-api-access-j725m\") pod \"redhat-marketplace-5nl9j\" (UID: \"3537b928-b237-45e1-a5e0-50380cbc1cc8\") " pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:22 crc kubenswrapper[4860]: I1014 15:36:22.783379 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:23 crc kubenswrapper[4860]: I1014 15:36:23.335780 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5nl9j"] Oct 14 15:36:23 crc kubenswrapper[4860]: I1014 15:36:23.420190 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nl9j" event={"ID":"3537b928-b237-45e1-a5e0-50380cbc1cc8","Type":"ContainerStarted","Data":"fc53b8d366bdc48f1a8d102f9e70c6cde17d0995ebac6c2e40196a18737d83d8"} Oct 14 15:36:24 crc kubenswrapper[4860]: I1014 15:36:24.436480 4860 generic.go:334] "Generic (PLEG): container finished" podID="3537b928-b237-45e1-a5e0-50380cbc1cc8" containerID="7058c0597223624c84d8bd58ff595c2df87c83aa830bc92acb618a3e3b125108" exitCode=0 Oct 14 15:36:24 crc kubenswrapper[4860]: I1014 15:36:24.436635 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nl9j" event={"ID":"3537b928-b237-45e1-a5e0-50380cbc1cc8","Type":"ContainerDied","Data":"7058c0597223624c84d8bd58ff595c2df87c83aa830bc92acb618a3e3b125108"} Oct 14 15:36:26 crc kubenswrapper[4860]: I1014 15:36:26.457254 4860 generic.go:334] "Generic (PLEG): container finished" podID="3537b928-b237-45e1-a5e0-50380cbc1cc8" containerID="a4ca9093f6bfdc92df5570bd7e24de9ecd409bf7c7e8707aff48e6cdca95b96c" exitCode=0 Oct 14 15:36:26 crc kubenswrapper[4860]: I1014 15:36:26.457382 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nl9j" event={"ID":"3537b928-b237-45e1-a5e0-50380cbc1cc8","Type":"ContainerDied","Data":"a4ca9093f6bfdc92df5570bd7e24de9ecd409bf7c7e8707aff48e6cdca95b96c"} Oct 14 15:36:27 crc kubenswrapper[4860]: I1014 15:36:27.476660 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nl9j" event={"ID":"3537b928-b237-45e1-a5e0-50380cbc1cc8","Type":"ContainerStarted","Data":"3cd50adf582d600f4364acac26f902873774a3d5f78861b183af693521db52e2"} Oct 14 15:36:27 crc kubenswrapper[4860]: I1014 15:36:27.509970 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5nl9j" podStartSLOduration=2.862523424 podStartE2EDuration="5.509948546s" podCreationTimestamp="2025-10-14 15:36:22 +0000 UTC" firstStartedPulling="2025-10-14 15:36:24.439876552 +0000 UTC m=+2846.026660001" lastFinishedPulling="2025-10-14 15:36:27.087301664 +0000 UTC m=+2848.674085123" observedRunningTime="2025-10-14 15:36:27.503093659 +0000 UTC m=+2849.089877108" watchObservedRunningTime="2025-10-14 15:36:27.509948546 +0000 UTC m=+2849.096732015" Oct 14 15:36:29 crc kubenswrapper[4860]: I1014 15:36:29.245229 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:36:29 crc kubenswrapper[4860]: I1014 15:36:29.245842 4860 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:36:29 crc kubenswrapper[4860]: I1014 15:36:29.981236 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vj6vc" podUID="a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" containerName="registry-server" probeResult="failure" output=< Oct 14 15:36:29 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:36:29 crc kubenswrapper[4860]: > Oct 14 15:36:32 crc kubenswrapper[4860]: I1014 15:36:32.783782 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:32 crc kubenswrapper[4860]: I1014 15:36:32.784123 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:32 crc kubenswrapper[4860]: I1014 15:36:32.832459 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:33 crc kubenswrapper[4860]: I1014 15:36:33.582263 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:35 crc kubenswrapper[4860]: I1014 15:36:35.643034 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5nl9j"] Oct 14 15:36:35 crc kubenswrapper[4860]: I1014 15:36:35.643632 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5nl9j" podUID="3537b928-b237-45e1-a5e0-50380cbc1cc8" containerName="registry-server" containerID="cri-o://3cd50adf582d600f4364acac26f902873774a3d5f78861b183af693521db52e2" gracePeriod=2 Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.082382 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.179703 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j725m\" (UniqueName: \"kubernetes.io/projected/3537b928-b237-45e1-a5e0-50380cbc1cc8-kube-api-access-j725m\") pod \"3537b928-b237-45e1-a5e0-50380cbc1cc8\" (UID: \"3537b928-b237-45e1-a5e0-50380cbc1cc8\") " Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.180127 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3537b928-b237-45e1-a5e0-50380cbc1cc8-catalog-content\") pod \"3537b928-b237-45e1-a5e0-50380cbc1cc8\" (UID: \"3537b928-b237-45e1-a5e0-50380cbc1cc8\") " Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.180275 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3537b928-b237-45e1-a5e0-50380cbc1cc8-utilities\") pod \"3537b928-b237-45e1-a5e0-50380cbc1cc8\" (UID: \"3537b928-b237-45e1-a5e0-50380cbc1cc8\") " Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.182127 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3537b928-b237-45e1-a5e0-50380cbc1cc8-utilities" (OuterVolumeSpecName: "utilities") pod "3537b928-b237-45e1-a5e0-50380cbc1cc8" (UID: "3537b928-b237-45e1-a5e0-50380cbc1cc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.188161 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3537b928-b237-45e1-a5e0-50380cbc1cc8-kube-api-access-j725m" (OuterVolumeSpecName: "kube-api-access-j725m") pod "3537b928-b237-45e1-a5e0-50380cbc1cc8" (UID: "3537b928-b237-45e1-a5e0-50380cbc1cc8"). InnerVolumeSpecName "kube-api-access-j725m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.194123 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3537b928-b237-45e1-a5e0-50380cbc1cc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3537b928-b237-45e1-a5e0-50380cbc1cc8" (UID: "3537b928-b237-45e1-a5e0-50380cbc1cc8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.283867 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3537b928-b237-45e1-a5e0-50380cbc1cc8-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.283942 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3537b928-b237-45e1-a5e0-50380cbc1cc8-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.283960 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j725m\" (UniqueName: \"kubernetes.io/projected/3537b928-b237-45e1-a5e0-50380cbc1cc8-kube-api-access-j725m\") on node \"crc\" DevicePath \"\"" Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.560715 4860 generic.go:334] "Generic (PLEG): container finished" podID="3537b928-b237-45e1-a5e0-50380cbc1cc8" containerID="3cd50adf582d600f4364acac26f902873774a3d5f78861b183af693521db52e2" exitCode=0 Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.560761 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nl9j" event={"ID":"3537b928-b237-45e1-a5e0-50380cbc1cc8","Type":"ContainerDied","Data":"3cd50adf582d600f4364acac26f902873774a3d5f78861b183af693521db52e2"} Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.560837 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nl9j" event={"ID":"3537b928-b237-45e1-a5e0-50380cbc1cc8","Type":"ContainerDied","Data":"fc53b8d366bdc48f1a8d102f9e70c6cde17d0995ebac6c2e40196a18737d83d8"} Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.560834 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5nl9j" Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.560857 4860 scope.go:117] "RemoveContainer" containerID="3cd50adf582d600f4364acac26f902873774a3d5f78861b183af693521db52e2" Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.596517 4860 scope.go:117] "RemoveContainer" containerID="a4ca9093f6bfdc92df5570bd7e24de9ecd409bf7c7e8707aff48e6cdca95b96c" Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.600284 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5nl9j"] Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.609818 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5nl9j"] Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.616018 4860 scope.go:117] "RemoveContainer" containerID="7058c0597223624c84d8bd58ff595c2df87c83aa830bc92acb618a3e3b125108" Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.656213 4860 scope.go:117] "RemoveContainer" containerID="3cd50adf582d600f4364acac26f902873774a3d5f78861b183af693521db52e2" Oct 14 15:36:36 crc kubenswrapper[4860]: E1014 15:36:36.656923 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cd50adf582d600f4364acac26f902873774a3d5f78861b183af693521db52e2\": container with ID starting with 3cd50adf582d600f4364acac26f902873774a3d5f78861b183af693521db52e2 not found: ID does not exist" containerID="3cd50adf582d600f4364acac26f902873774a3d5f78861b183af693521db52e2" Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.656959 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cd50adf582d600f4364acac26f902873774a3d5f78861b183af693521db52e2"} err="failed to get container status \"3cd50adf582d600f4364acac26f902873774a3d5f78861b183af693521db52e2\": rpc error: code = NotFound desc = could not find container \"3cd50adf582d600f4364acac26f902873774a3d5f78861b183af693521db52e2\": container with ID starting with 3cd50adf582d600f4364acac26f902873774a3d5f78861b183af693521db52e2 not found: ID does not exist" Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.656987 4860 scope.go:117] "RemoveContainer" containerID="a4ca9093f6bfdc92df5570bd7e24de9ecd409bf7c7e8707aff48e6cdca95b96c" Oct 14 15:36:36 crc kubenswrapper[4860]: E1014 15:36:36.657241 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4ca9093f6bfdc92df5570bd7e24de9ecd409bf7c7e8707aff48e6cdca95b96c\": container with ID starting with a4ca9093f6bfdc92df5570bd7e24de9ecd409bf7c7e8707aff48e6cdca95b96c not found: ID does not exist" containerID="a4ca9093f6bfdc92df5570bd7e24de9ecd409bf7c7e8707aff48e6cdca95b96c" Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.657269 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ca9093f6bfdc92df5570bd7e24de9ecd409bf7c7e8707aff48e6cdca95b96c"} err="failed to get container status \"a4ca9093f6bfdc92df5570bd7e24de9ecd409bf7c7e8707aff48e6cdca95b96c\": rpc error: code = NotFound desc = could not find container \"a4ca9093f6bfdc92df5570bd7e24de9ecd409bf7c7e8707aff48e6cdca95b96c\": container with ID starting with a4ca9093f6bfdc92df5570bd7e24de9ecd409bf7c7e8707aff48e6cdca95b96c not found: ID does not exist" Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.657290 4860 scope.go:117] "RemoveContainer" 
containerID="7058c0597223624c84d8bd58ff595c2df87c83aa830bc92acb618a3e3b125108" Oct 14 15:36:36 crc kubenswrapper[4860]: E1014 15:36:36.657561 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7058c0597223624c84d8bd58ff595c2df87c83aa830bc92acb618a3e3b125108\": container with ID starting with 7058c0597223624c84d8bd58ff595c2df87c83aa830bc92acb618a3e3b125108 not found: ID does not exist" containerID="7058c0597223624c84d8bd58ff595c2df87c83aa830bc92acb618a3e3b125108" Oct 14 15:36:36 crc kubenswrapper[4860]: I1014 15:36:36.657588 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7058c0597223624c84d8bd58ff595c2df87c83aa830bc92acb618a3e3b125108"} err="failed to get container status \"7058c0597223624c84d8bd58ff595c2df87c83aa830bc92acb618a3e3b125108\": rpc error: code = NotFound desc = could not find container \"7058c0597223624c84d8bd58ff595c2df87c83aa830bc92acb618a3e3b125108\": container with ID starting with 7058c0597223624c84d8bd58ff595c2df87c83aa830bc92acb618a3e3b125108 not found: ID does not exist" Oct 14 15:36:37 crc kubenswrapper[4860]: I1014 15:36:37.071568 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3537b928-b237-45e1-a5e0-50380cbc1cc8" path="/var/lib/kubelet/pods/3537b928-b237-45e1-a5e0-50380cbc1cc8/volumes" Oct 14 15:36:38 crc kubenswrapper[4860]: I1014 15:36:38.990456 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:36:39 crc kubenswrapper[4860]: I1014 15:36:39.050267 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:36:39 crc kubenswrapper[4860]: I1014 15:36:39.843859 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vj6vc"] Oct 14 15:36:40 crc kubenswrapper[4860]: I1014 15:36:40.599089 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vj6vc" podUID="a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" containerName="registry-server" containerID="cri-o://9406c76ca69d253ab4da94eec5f2189f407d212f573856b01dfa5f2d85c6eac3" gracePeriod=2 Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.050543 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.177209 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-utilities\") pod \"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22\" (UID: \"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22\") " Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.177441 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktpw7\" (UniqueName: \"kubernetes.io/projected/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-kube-api-access-ktpw7\") pod \"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22\" (UID: \"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22\") " Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.177565 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-catalog-content\") pod \"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22\" (UID: \"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22\") " Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.182427 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-utilities" (OuterVolumeSpecName: "utilities") pod "a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" (UID: "a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.185715 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-kube-api-access-ktpw7" (OuterVolumeSpecName: "kube-api-access-ktpw7") pod "a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" (UID: "a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22"). InnerVolumeSpecName "kube-api-access-ktpw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.269900 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" (UID: "a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.280867 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.280914 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktpw7\" (UniqueName: \"kubernetes.io/projected/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-kube-api-access-ktpw7\") on node \"crc\" DevicePath \"\"" Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.280951 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.609837 4860 generic.go:334] "Generic (PLEG): container finished" podID="a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" containerID="9406c76ca69d253ab4da94eec5f2189f407d212f573856b01dfa5f2d85c6eac3" exitCode=0 Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.610282 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vj6vc" Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.610336 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vj6vc" event={"ID":"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22","Type":"ContainerDied","Data":"9406c76ca69d253ab4da94eec5f2189f407d212f573856b01dfa5f2d85c6eac3"} Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.610780 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vj6vc" event={"ID":"a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22","Type":"ContainerDied","Data":"4589441b8c2f977748d7604646d4028ebc90b9c476b5f2730ba4f58739c57041"} Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.610826 4860 scope.go:117] "RemoveContainer" containerID="9406c76ca69d253ab4da94eec5f2189f407d212f573856b01dfa5f2d85c6eac3" Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.632881 4860 scope.go:117] "RemoveContainer" containerID="609062045f5f7dc15c8ae3fb4769fd46e4c478fa8677f1ab4d8d2577d2d6d2f2" Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.651943 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vj6vc"] Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.659353 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vj6vc"] Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.664458 4860 scope.go:117] "RemoveContainer" containerID="1d398494bc4806a811c87c274db03842a74870beaefbcbef1a82454bac0960ee" Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.706714 4860 scope.go:117] "RemoveContainer" containerID="9406c76ca69d253ab4da94eec5f2189f407d212f573856b01dfa5f2d85c6eac3" Oct 14 15:36:41 crc kubenswrapper[4860]: E1014 15:36:41.707239 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9406c76ca69d253ab4da94eec5f2189f407d212f573856b01dfa5f2d85c6eac3\": container with ID starting with 9406c76ca69d253ab4da94eec5f2189f407d212f573856b01dfa5f2d85c6eac3 not found: ID does not exist" containerID="9406c76ca69d253ab4da94eec5f2189f407d212f573856b01dfa5f2d85c6eac3" Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.707286 4860 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9406c76ca69d253ab4da94eec5f2189f407d212f573856b01dfa5f2d85c6eac3"} err="failed to get container status \"9406c76ca69d253ab4da94eec5f2189f407d212f573856b01dfa5f2d85c6eac3\": rpc error: code = NotFound desc = could not find container \"9406c76ca69d253ab4da94eec5f2189f407d212f573856b01dfa5f2d85c6eac3\": container with ID starting with 9406c76ca69d253ab4da94eec5f2189f407d212f573856b01dfa5f2d85c6eac3 not found: ID does not exist" Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.707312 4860 scope.go:117] "RemoveContainer" containerID="609062045f5f7dc15c8ae3fb4769fd46e4c478fa8677f1ab4d8d2577d2d6d2f2" Oct 14 15:36:41 crc kubenswrapper[4860]: E1014 15:36:41.707731 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"609062045f5f7dc15c8ae3fb4769fd46e4c478fa8677f1ab4d8d2577d2d6d2f2\": container with ID starting with 609062045f5f7dc15c8ae3fb4769fd46e4c478fa8677f1ab4d8d2577d2d6d2f2 not found: ID does not exist" containerID="609062045f5f7dc15c8ae3fb4769fd46e4c478fa8677f1ab4d8d2577d2d6d2f2" Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.707760 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"609062045f5f7dc15c8ae3fb4769fd46e4c478fa8677f1ab4d8d2577d2d6d2f2"} err="failed to get container status \"609062045f5f7dc15c8ae3fb4769fd46e4c478fa8677f1ab4d8d2577d2d6d2f2\": rpc error: code = NotFound desc = could not find container \"609062045f5f7dc15c8ae3fb4769fd46e4c478fa8677f1ab4d8d2577d2d6d2f2\": container with ID starting with 609062045f5f7dc15c8ae3fb4769fd46e4c478fa8677f1ab4d8d2577d2d6d2f2 not found: ID does not exist" Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.707782 4860 scope.go:117] "RemoveContainer" containerID="1d398494bc4806a811c87c274db03842a74870beaefbcbef1a82454bac0960ee" Oct 14 15:36:41 crc kubenswrapper[4860]: E1014 15:36:41.708581 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d398494bc4806a811c87c274db03842a74870beaefbcbef1a82454bac0960ee\": container with ID starting with 1d398494bc4806a811c87c274db03842a74870beaefbcbef1a82454bac0960ee not found: ID does not exist" containerID="1d398494bc4806a811c87c274db03842a74870beaefbcbef1a82454bac0960ee" Oct 14 15:36:41 crc kubenswrapper[4860]: I1014 15:36:41.708607 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d398494bc4806a811c87c274db03842a74870beaefbcbef1a82454bac0960ee"} err="failed to get container status \"1d398494bc4806a811c87c274db03842a74870beaefbcbef1a82454bac0960ee\": rpc error: code = NotFound desc = could not find container \"1d398494bc4806a811c87c274db03842a74870beaefbcbef1a82454bac0960ee\": container with ID starting with 1d398494bc4806a811c87c274db03842a74870beaefbcbef1a82454bac0960ee not found: ID does not exist" Oct 14 15:36:43 crc kubenswrapper[4860]: I1014 15:36:43.074312 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" path="/var/lib/kubelet/pods/a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22/volumes" Oct 14 15:36:59 crc kubenswrapper[4860]: I1014 15:36:59.246109 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:36:59 crc kubenswrapper[4860]: I1014 15:36:59.246597 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:36:59 crc kubenswrapper[4860]: I1014 15:36:59.246637 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 15:36:59 crc kubenswrapper[4860]: I1014 15:36:59.247118 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 15:36:59 crc kubenswrapper[4860]: I1014 15:36:59.247162 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" gracePeriod=600 Oct 14 15:36:59 crc kubenswrapper[4860]: E1014 15:36:59.374929 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:36:59 crc kubenswrapper[4860]: I1014 15:36:59.761460 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" exitCode=0 Oct 14 15:36:59 crc kubenswrapper[4860]: I1014 15:36:59.761522 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713"} Oct 14 15:36:59 crc kubenswrapper[4860]: I1014 15:36:59.761571 4860 scope.go:117] "RemoveContainer" containerID="5b6077d3d18fd646893014ecf5133ac8cdd7d39e8862322f0c3fde57f1da2b99" Oct 14 15:36:59 crc kubenswrapper[4860]: I1014 15:36:59.762272 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:36:59 crc kubenswrapper[4860]: E1014 15:36:59.762598 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:37:15 crc kubenswrapper[4860]: I1014 15:37:15.062264 4860 scope.go:117] "RemoveContainer" 
containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:37:15 crc kubenswrapper[4860]: E1014 15:37:15.065922 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:37:30 crc kubenswrapper[4860]: I1014 15:37:30.061267 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:37:30 crc kubenswrapper[4860]: E1014 15:37:30.062099 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:37:42 crc kubenswrapper[4860]: I1014 15:37:42.062541 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:37:42 crc kubenswrapper[4860]: E1014 15:37:42.064365 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:37:57 crc kubenswrapper[4860]: I1014 15:37:57.062214 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:37:57 crc kubenswrapper[4860]: E1014 15:37:57.063353 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:38:01 crc kubenswrapper[4860]: I1014 15:38:01.282412 4860 generic.go:334] "Generic (PLEG): container finished" podID="5ea863c9-1241-4529-b07a-7ded53a8a9ca" containerID="aafceb9bc96f2b676daa272afcafe03bb3193c39fc4b2ff88eb4d6f9983ed0d6" exitCode=0 Oct 14 15:38:01 crc kubenswrapper[4860]: I1014 15:38:01.282518 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" event={"ID":"5ea863c9-1241-4529-b07a-7ded53a8a9ca","Type":"ContainerDied","Data":"aafceb9bc96f2b676daa272afcafe03bb3193c39fc4b2ff88eb4d6f9983ed0d6"} Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.714895 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.882706 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-cell1-compute-config-0\") pod \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.882780 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-inventory\") pod \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.882875 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-combined-ca-bundle\") pod \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.882935 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-cell1-compute-config-1\") pod \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.882999 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-ssh-key\") pod \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.883072 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n9s4\" (UniqueName: \"kubernetes.io/projected/5ea863c9-1241-4529-b07a-7ded53a8a9ca-kube-api-access-4n9s4\") pod \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.883168 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-extra-config-0\") pod \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.883199 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-migration-ssh-key-0\") pod \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.883228 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-migration-ssh-key-1\") pod \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\" (UID: \"5ea863c9-1241-4529-b07a-7ded53a8a9ca\") " Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.891099 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5ea863c9-1241-4529-b07a-7ded53a8a9ca-kube-api-access-4n9s4" (OuterVolumeSpecName: "kube-api-access-4n9s4") pod "5ea863c9-1241-4529-b07a-7ded53a8a9ca" (UID: "5ea863c9-1241-4529-b07a-7ded53a8a9ca"). InnerVolumeSpecName "kube-api-access-4n9s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.892600 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "5ea863c9-1241-4529-b07a-7ded53a8a9ca" (UID: "5ea863c9-1241-4529-b07a-7ded53a8a9ca"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.917561 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "5ea863c9-1241-4529-b07a-7ded53a8a9ca" (UID: "5ea863c9-1241-4529-b07a-7ded53a8a9ca"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.921606 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "5ea863c9-1241-4529-b07a-7ded53a8a9ca" (UID: "5ea863c9-1241-4529-b07a-7ded53a8a9ca"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.923407 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-inventory" (OuterVolumeSpecName: "inventory") pod "5ea863c9-1241-4529-b07a-7ded53a8a9ca" (UID: "5ea863c9-1241-4529-b07a-7ded53a8a9ca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.923442 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "5ea863c9-1241-4529-b07a-7ded53a8a9ca" (UID: "5ea863c9-1241-4529-b07a-7ded53a8a9ca"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.925002 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "5ea863c9-1241-4529-b07a-7ded53a8a9ca" (UID: "5ea863c9-1241-4529-b07a-7ded53a8a9ca"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.927602 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5ea863c9-1241-4529-b07a-7ded53a8a9ca" (UID: "5ea863c9-1241-4529-b07a-7ded53a8a9ca"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.942158 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "5ea863c9-1241-4529-b07a-7ded53a8a9ca" (UID: "5ea863c9-1241-4529-b07a-7ded53a8a9ca"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.986298 4860 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.987538 4860 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.987745 4860 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.987861 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.987960 4860 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.988085 4860 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.988231 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ea863c9-1241-4529-b07a-7ded53a8a9ca-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.988334 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n9s4\" (UniqueName: \"kubernetes.io/projected/5ea863c9-1241-4529-b07a-7ded53a8a9ca-kube-api-access-4n9s4\") on node \"crc\" DevicePath \"\"" Oct 14 15:38:02 crc kubenswrapper[4860]: I1014 15:38:02.988604 4860 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5ea863c9-1241-4529-b07a-7ded53a8a9ca-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.301280 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" event={"ID":"5ea863c9-1241-4529-b07a-7ded53a8a9ca","Type":"ContainerDied","Data":"90b8d6c64f6141e93253d029aef4ad29bac45a46daf26fa29062ef3b6d3cb575"} Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.301326 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-52bv4" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.301331 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90b8d6c64f6141e93253d029aef4ad29bac45a46daf26fa29062ef3b6d3cb575" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.404013 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw"] Oct 14 15:38:03 crc kubenswrapper[4860]: E1014 15:38:03.404433 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3537b928-b237-45e1-a5e0-50380cbc1cc8" containerName="extract-content" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.404450 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3537b928-b237-45e1-a5e0-50380cbc1cc8" containerName="extract-content" Oct 14 15:38:03 crc kubenswrapper[4860]: E1014 15:38:03.404466 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3537b928-b237-45e1-a5e0-50380cbc1cc8" containerName="registry-server" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.404472 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3537b928-b237-45e1-a5e0-50380cbc1cc8" containerName="registry-server" Oct 14 15:38:03 crc kubenswrapper[4860]: E1014 15:38:03.404490 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea863c9-1241-4529-b07a-7ded53a8a9ca" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.404498 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea863c9-1241-4529-b07a-7ded53a8a9ca" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 14 15:38:03 crc kubenswrapper[4860]: E1014 15:38:03.404508 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" containerName="registry-server" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.404514 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" containerName="registry-server" Oct 14 15:38:03 crc kubenswrapper[4860]: E1014 15:38:03.404525 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" containerName="extract-content" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.404531 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" containerName="extract-content" Oct 14 15:38:03 crc kubenswrapper[4860]: E1014 15:38:03.404568 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" containerName="extract-utilities" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.404576 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" containerName="extract-utilities" Oct 14 15:38:03 crc kubenswrapper[4860]: E1014 15:38:03.404594 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3537b928-b237-45e1-a5e0-50380cbc1cc8" containerName="extract-utilities" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.404599 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3537b928-b237-45e1-a5e0-50380cbc1cc8" containerName="extract-utilities" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.404791 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3537b928-b237-45e1-a5e0-50380cbc1cc8" containerName="registry-server" Oct 14 15:38:03 crc 
kubenswrapper[4860]: I1014 15:38:03.404812 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea863c9-1241-4529-b07a-7ded53a8a9ca" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.404826 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d61b1b-ae42-4621-a7e8-57fb4e4ecd22" containerName="registry-server" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.405623 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.407579 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.407739 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.407921 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.408233 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9ftfz" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.409246 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.430465 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw"] Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.599620 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.599746 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.599783 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.599898 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6658s\" (UniqueName: \"kubernetes.io/projected/567e371c-991d-4515-98bf-b17f6573a744-kube-api-access-6658s\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.600089 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.600144 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.600350 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.701449 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.701493 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.701513 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6658s\" (UniqueName: \"kubernetes.io/projected/567e371c-991d-4515-98bf-b17f6573a744-kube-api-access-6658s\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.701545 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.701564 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.701634 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.701678 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.705268 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.705481 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.705738 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.706466 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.707290 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.708912 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.721739 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6658s\" (UniqueName: \"kubernetes.io/projected/567e371c-991d-4515-98bf-b17f6573a744-kube-api-access-6658s\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:03 crc kubenswrapper[4860]: I1014 15:38:03.765733 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:38:04 crc kubenswrapper[4860]: I1014 15:38:04.287651 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw"] Oct 14 15:38:04 crc kubenswrapper[4860]: I1014 15:38:04.311944 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" event={"ID":"567e371c-991d-4515-98bf-b17f6573a744","Type":"ContainerStarted","Data":"e9f7f01a35bd45f61c16acfc12ca5ad6ffb21be37f16f51b06eb876c1f882b83"} Oct 14 15:38:05 crc kubenswrapper[4860]: I1014 15:38:05.322441 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" event={"ID":"567e371c-991d-4515-98bf-b17f6573a744","Type":"ContainerStarted","Data":"f607888ae7f87a0159f6d268ee77c69f0ce0f8da1e097a53822f07656b772a7d"} Oct 14 15:38:05 crc kubenswrapper[4860]: I1014 15:38:05.354082 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" podStartSLOduration=2.204483597 podStartE2EDuration="2.354054222s" podCreationTimestamp="2025-10-14 15:38:03 +0000 UTC" firstStartedPulling="2025-10-14 15:38:04.292866731 +0000 UTC m=+2945.879650180" lastFinishedPulling="2025-10-14 15:38:04.442437356 +0000 UTC m=+2946.029220805" observedRunningTime="2025-10-14 15:38:05.341596179 +0000 UTC m=+2946.928379628" watchObservedRunningTime="2025-10-14 15:38:05.354054222 +0000 UTC m=+2946.940837681" Oct 14 15:38:10 crc kubenswrapper[4860]: I1014 15:38:10.062539 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:38:10 crc kubenswrapper[4860]: E1014 15:38:10.064281 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:38:25 crc kubenswrapper[4860]: I1014 15:38:25.062111 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:38:25 crc kubenswrapper[4860]: E1014 15:38:25.062816 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:38:40 crc kubenswrapper[4860]: I1014 15:38:40.061951 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:38:40 crc kubenswrapper[4860]: E1014 15:38:40.062644 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:38:54 crc kubenswrapper[4860]: I1014 15:38:54.062295 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:38:54 crc kubenswrapper[4860]: E1014 15:38:54.062992 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:39:07 crc kubenswrapper[4860]: I1014 15:39:07.062405 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:39:07 crc kubenswrapper[4860]: E1014 15:39:07.063273 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:39:22 crc kubenswrapper[4860]: I1014 15:39:22.062367 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:39:22 crc kubenswrapper[4860]: E1014 15:39:22.063792 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:39:36 crc kubenswrapper[4860]: I1014 15:39:36.062115 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:39:36 crc kubenswrapper[4860]: E1014 15:39:36.063170 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:39:51 crc kubenswrapper[4860]: I1014 15:39:51.062990 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:39:51 crc kubenswrapper[4860]: E1014 15:39:51.064541 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:40:05 crc kubenswrapper[4860]: I1014 15:40:05.061803 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:40:05 crc kubenswrapper[4860]: E1014 15:40:05.062677 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:40:18 crc kubenswrapper[4860]: I1014 15:40:18.062402 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:40:18 crc kubenswrapper[4860]: E1014 15:40:18.063152 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:40:32 crc kubenswrapper[4860]: I1014 15:40:32.061872 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:40:32 crc kubenswrapper[4860]: E1014 15:40:32.062758 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:40:45 crc kubenswrapper[4860]: I1014 15:40:45.061469 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:40:45 crc kubenswrapper[4860]: E1014 15:40:45.062253 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:40:57 crc kubenswrapper[4860]: I1014 15:40:57.061954 4860 
scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:40:57 crc kubenswrapper[4860]: E1014 15:40:57.062753 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:41:09 crc kubenswrapper[4860]: I1014 15:41:09.067887 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:41:09 crc kubenswrapper[4860]: E1014 15:41:09.068628 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:41:20 crc kubenswrapper[4860]: I1014 15:41:20.061874 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:41:20 crc kubenswrapper[4860]: E1014 15:41:20.062627 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:41:31 crc kubenswrapper[4860]: E1014 15:41:31.609894 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod567e371c_991d_4515_98bf_b17f6573a744.slice/crio-f607888ae7f87a0159f6d268ee77c69f0ce0f8da1e097a53822f07656b772a7d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod567e371c_991d_4515_98bf_b17f6573a744.slice/crio-conmon-f607888ae7f87a0159f6d268ee77c69f0ce0f8da1e097a53822f07656b772a7d.scope\": RecentStats: unable to find data in memory cache]" Oct 14 15:41:32 crc kubenswrapper[4860]: I1014 15:41:32.149149 4860 generic.go:334] "Generic (PLEG): container finished" podID="567e371c-991d-4515-98bf-b17f6573a744" containerID="f607888ae7f87a0159f6d268ee77c69f0ce0f8da1e097a53822f07656b772a7d" exitCode=0 Oct 14 15:41:32 crc kubenswrapper[4860]: I1014 15:41:32.149198 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" event={"ID":"567e371c-991d-4515-98bf-b17f6573a744","Type":"ContainerDied","Data":"f607888ae7f87a0159f6d268ee77c69f0ce0f8da1e097a53822f07656b772a7d"} Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.565770 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.750984 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-0\") pod \"567e371c-991d-4515-98bf-b17f6573a744\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.751250 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-telemetry-combined-ca-bundle\") pod \"567e371c-991d-4515-98bf-b17f6573a744\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.751814 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ssh-key\") pod \"567e371c-991d-4515-98bf-b17f6573a744\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.751884 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-2\") pod \"567e371c-991d-4515-98bf-b17f6573a744\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.752074 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6658s\" (UniqueName: \"kubernetes.io/projected/567e371c-991d-4515-98bf-b17f6573a744-kube-api-access-6658s\") pod \"567e371c-991d-4515-98bf-b17f6573a744\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.752102 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-1\") pod \"567e371c-991d-4515-98bf-b17f6573a744\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.752123 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-inventory\") pod \"567e371c-991d-4515-98bf-b17f6573a744\" (UID: \"567e371c-991d-4515-98bf-b17f6573a744\") " Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.757214 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567e371c-991d-4515-98bf-b17f6573a744-kube-api-access-6658s" (OuterVolumeSpecName: "kube-api-access-6658s") pod "567e371c-991d-4515-98bf-b17f6573a744" (UID: "567e371c-991d-4515-98bf-b17f6573a744"). InnerVolumeSpecName "kube-api-access-6658s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.781659 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "567e371c-991d-4515-98bf-b17f6573a744" (UID: "567e371c-991d-4515-98bf-b17f6573a744"). 
InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.782309 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "567e371c-991d-4515-98bf-b17f6573a744" (UID: "567e371c-991d-4515-98bf-b17f6573a744"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.790540 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "567e371c-991d-4515-98bf-b17f6573a744" (UID: "567e371c-991d-4515-98bf-b17f6573a744"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.792632 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "567e371c-991d-4515-98bf-b17f6573a744" (UID: "567e371c-991d-4515-98bf-b17f6573a744"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.793578 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-inventory" (OuterVolumeSpecName: "inventory") pod "567e371c-991d-4515-98bf-b17f6573a744" (UID: "567e371c-991d-4515-98bf-b17f6573a744"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.803350 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "567e371c-991d-4515-98bf-b17f6573a744" (UID: "567e371c-991d-4515-98bf-b17f6573a744"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.854105 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6658s\" (UniqueName: \"kubernetes.io/projected/567e371c-991d-4515-98bf-b17f6573a744-kube-api-access-6658s\") on node \"crc\" DevicePath \"\"" Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.854150 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.854165 4860 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.854177 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.854189 4860 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.854198 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 15:41:33 crc kubenswrapper[4860]: I1014 15:41:33.854205 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/567e371c-991d-4515-98bf-b17f6573a744-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Oct 14 15:41:34 crc kubenswrapper[4860]: I1014 15:41:34.170816 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" event={"ID":"567e371c-991d-4515-98bf-b17f6573a744","Type":"ContainerDied","Data":"e9f7f01a35bd45f61c16acfc12ca5ad6ffb21be37f16f51b06eb876c1f882b83"} Oct 14 15:41:34 crc kubenswrapper[4860]: I1014 15:41:34.170858 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9f7f01a35bd45f61c16acfc12ca5ad6ffb21be37f16f51b06eb876c1f882b83" Oct 14 15:41:34 crc kubenswrapper[4860]: I1014 15:41:34.170874 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw" Oct 14 15:41:35 crc kubenswrapper[4860]: I1014 15:41:35.061683 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:41:35 crc kubenswrapper[4860]: E1014 15:41:35.062266 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:41:48 crc kubenswrapper[4860]: I1014 15:41:48.061572 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:41:48 crc kubenswrapper[4860]: E1014 15:41:48.062394 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:41:59 crc kubenswrapper[4860]: I1014 15:41:59.067254 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:41:59 crc kubenswrapper[4860]: E1014 15:41:59.067935 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:42:12 crc kubenswrapper[4860]: I1014 15:42:12.062557 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:42:12 crc kubenswrapper[4860]: I1014 15:42:12.520778 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"df284c3f5fc482d600dcd6fa235f27f82a32f7f2d6fa45712f66ca1ba04e34d2"} Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.787853 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Oct 14 15:42:35 crc kubenswrapper[4860]: E1014 15:42:35.789226 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567e371c-991d-4515-98bf-b17f6573a744" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.789247 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="567e371c-991d-4515-98bf-b17f6573a744" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.789505 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="567e371c-991d-4515-98bf-b17f6573a744" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.790450 4860 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.792382 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mlp24" Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.792605 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.792841 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.793898 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.818135 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.933721 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.933840 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ccedbfab-f66d-49a5-baac-50c603e57c98-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.933873 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ccedbfab-f66d-49a5-baac-50c603e57c98-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.933970 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ccedbfab-f66d-49a5-baac-50c603e57c98-config-data\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.934132 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ccedbfab-f66d-49a5-baac-50c603e57c98-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.934210 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.934281 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" 
(UniqueName: \"kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.934316 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8v28\" (UniqueName: \"kubernetes.io/projected/ccedbfab-f66d-49a5-baac-50c603e57c98-kube-api-access-b8v28\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:35 crc kubenswrapper[4860]: I1014 15:42:35.934425 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.036635 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ccedbfab-f66d-49a5-baac-50c603e57c98-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.036685 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ccedbfab-f66d-49a5-baac-50c603e57c98-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.036706 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ccedbfab-f66d-49a5-baac-50c603e57c98-config-data\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.036763 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ccedbfab-f66d-49a5-baac-50c603e57c98-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.036797 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.036834 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.036852 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8v28\" (UniqueName: 
\"kubernetes.io/projected/ccedbfab-f66d-49a5-baac-50c603e57c98-kube-api-access-b8v28\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.036910 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.036986 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.037403 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ccedbfab-f66d-49a5-baac-50c603e57c98-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.037861 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ccedbfab-f66d-49a5-baac-50c603e57c98-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.038122 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.038225 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ccedbfab-f66d-49a5-baac-50c603e57c98-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.038638 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ccedbfab-f66d-49a5-baac-50c603e57c98-config-data\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.043610 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.043798 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " 
pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.044536 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.063965 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8v28\" (UniqueName: \"kubernetes.io/projected/ccedbfab-f66d-49a5-baac-50c603e57c98-kube-api-access-b8v28\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.073446 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.120584 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.643320 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.660965 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 15:42:36 crc kubenswrapper[4860]: I1014 15:42:36.741175 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ccedbfab-f66d-49a5-baac-50c603e57c98","Type":"ContainerStarted","Data":"89d3f6a58deafe2f41a31cbc357b89f57ac8cb38317cc1b61e15c35b30b20aff"} Oct 14 15:43:21 crc kubenswrapper[4860]: E1014 15:43:21.143968 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Oct 14 15:43:21 crc kubenswrapper[4860]: E1014 15:43:21.149918 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8v28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(ccedbfab-f66d-49a5-baac-50c603e57c98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 15:43:21 crc kubenswrapper[4860]: E1014 15:43:21.151444 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="ccedbfab-f66d-49a5-baac-50c603e57c98" Oct 14 15:43:21 crc kubenswrapper[4860]: E1014 15:43:21.186942 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="ccedbfab-f66d-49a5-baac-50c603e57c98" Oct 14 15:43:37 crc kubenswrapper[4860]: I1014 15:43:37.151680 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 14 15:43:39 crc kubenswrapper[4860]: I1014 15:43:39.323448 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ccedbfab-f66d-49a5-baac-50c603e57c98","Type":"ContainerStarted","Data":"635c7a3e77d8f3674f0d6a0dded9e523675e9241f6b476d42d5dca41b7fa133d"} Oct 14 15:43:39 crc kubenswrapper[4860]: I1014 15:43:39.349282 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.861934102 podStartE2EDuration="1m5.349265812s" podCreationTimestamp="2025-10-14 15:42:34 +0000 UTC" firstStartedPulling="2025-10-14 15:42:36.660765786 +0000 UTC m=+3218.247549235" lastFinishedPulling="2025-10-14 15:43:37.148097496 +0000 UTC m=+3278.734880945" observedRunningTime="2025-10-14 15:43:39.346687859 +0000 UTC m=+3280.933471308" watchObservedRunningTime="2025-10-14 15:43:39.349265812 +0000 UTC m=+3280.936049261" Oct 14 15:44:29 crc kubenswrapper[4860]: I1014 15:44:29.246057 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:44:29 crc kubenswrapper[4860]: I1014 15:44:29.246772 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:44:59 crc kubenswrapper[4860]: I1014 15:44:59.245586 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:44:59 crc kubenswrapper[4860]: I1014 15:44:59.246186 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:45:00 crc kubenswrapper[4860]: I1014 15:45:00.175147 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh"] Oct 14 15:45:00 crc kubenswrapper[4860]: I1014 15:45:00.176721 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh" Oct 14 15:45:00 crc kubenswrapper[4860]: I1014 15:45:00.178976 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 15:45:00 crc kubenswrapper[4860]: I1014 15:45:00.179920 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 15:45:00 crc kubenswrapper[4860]: I1014 15:45:00.236494 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh"] Oct 14 15:45:00 crc kubenswrapper[4860]: I1014 15:45:00.336486 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rrjd\" (UniqueName: \"kubernetes.io/projected/61f66387-c435-47f4-8ef3-bca1d94e2fdb-kube-api-access-7rrjd\") pod \"collect-profiles-29340945-p99bh\" (UID: \"61f66387-c435-47f4-8ef3-bca1d94e2fdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh" Oct 14 15:45:00 crc kubenswrapper[4860]: I1014 15:45:00.336564 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61f66387-c435-47f4-8ef3-bca1d94e2fdb-config-volume\") pod \"collect-profiles-29340945-p99bh\" (UID: \"61f66387-c435-47f4-8ef3-bca1d94e2fdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh" Oct 14 15:45:00 crc kubenswrapper[4860]: I1014 15:45:00.336639 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61f66387-c435-47f4-8ef3-bca1d94e2fdb-secret-volume\") pod \"collect-profiles-29340945-p99bh\" (UID: \"61f66387-c435-47f4-8ef3-bca1d94e2fdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh" Oct 14 15:45:00 crc kubenswrapper[4860]: I1014 15:45:00.438973 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61f66387-c435-47f4-8ef3-bca1d94e2fdb-secret-volume\") pod \"collect-profiles-29340945-p99bh\" (UID: \"61f66387-c435-47f4-8ef3-bca1d94e2fdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh" Oct 14 15:45:00 crc kubenswrapper[4860]: I1014 15:45:00.440474 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rrjd\" (UniqueName: \"kubernetes.io/projected/61f66387-c435-47f4-8ef3-bca1d94e2fdb-kube-api-access-7rrjd\") pod \"collect-profiles-29340945-p99bh\" (UID: \"61f66387-c435-47f4-8ef3-bca1d94e2fdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh" Oct 14 15:45:00 crc kubenswrapper[4860]: I1014 15:45:00.440575 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61f66387-c435-47f4-8ef3-bca1d94e2fdb-config-volume\") pod \"collect-profiles-29340945-p99bh\" (UID: \"61f66387-c435-47f4-8ef3-bca1d94e2fdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh" Oct 14 15:45:00 crc kubenswrapper[4860]: I1014 15:45:00.441636 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61f66387-c435-47f4-8ef3-bca1d94e2fdb-config-volume\") pod 
\"collect-profiles-29340945-p99bh\" (UID: \"61f66387-c435-47f4-8ef3-bca1d94e2fdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh" Oct 14 15:45:00 crc kubenswrapper[4860]: I1014 15:45:00.457714 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61f66387-c435-47f4-8ef3-bca1d94e2fdb-secret-volume\") pod \"collect-profiles-29340945-p99bh\" (UID: \"61f66387-c435-47f4-8ef3-bca1d94e2fdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh" Oct 14 15:45:00 crc kubenswrapper[4860]: I1014 15:45:00.460819 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rrjd\" (UniqueName: \"kubernetes.io/projected/61f66387-c435-47f4-8ef3-bca1d94e2fdb-kube-api-access-7rrjd\") pod \"collect-profiles-29340945-p99bh\" (UID: \"61f66387-c435-47f4-8ef3-bca1d94e2fdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh" Oct 14 15:45:00 crc kubenswrapper[4860]: I1014 15:45:00.507687 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh" Oct 14 15:45:01 crc kubenswrapper[4860]: I1014 15:45:01.929530 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh"] Oct 14 15:45:02 crc kubenswrapper[4860]: I1014 15:45:02.064352 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh" event={"ID":"61f66387-c435-47f4-8ef3-bca1d94e2fdb","Type":"ContainerStarted","Data":"11bdd199abed2a9a96d809e086a230b91bde947983870e53eac21df99ad54079"} Oct 14 15:45:03 crc kubenswrapper[4860]: I1014 15:45:03.088839 4860 generic.go:334] "Generic (PLEG): container finished" podID="61f66387-c435-47f4-8ef3-bca1d94e2fdb" containerID="b0c1a971f6c3413e8660b3bee1ad2ff090e520085033ba335398b9f02a850bf1" exitCode=0 Oct 14 15:45:03 crc kubenswrapper[4860]: I1014 15:45:03.088918 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh" event={"ID":"61f66387-c435-47f4-8ef3-bca1d94e2fdb","Type":"ContainerDied","Data":"b0c1a971f6c3413e8660b3bee1ad2ff090e520085033ba335398b9f02a850bf1"} Oct 14 15:45:04 crc kubenswrapper[4860]: I1014 15:45:04.647464 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh" Oct 14 15:45:04 crc kubenswrapper[4860]: I1014 15:45:04.764462 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rrjd\" (UniqueName: \"kubernetes.io/projected/61f66387-c435-47f4-8ef3-bca1d94e2fdb-kube-api-access-7rrjd\") pod \"61f66387-c435-47f4-8ef3-bca1d94e2fdb\" (UID: \"61f66387-c435-47f4-8ef3-bca1d94e2fdb\") " Oct 14 15:45:04 crc kubenswrapper[4860]: I1014 15:45:04.764550 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61f66387-c435-47f4-8ef3-bca1d94e2fdb-config-volume\") pod \"61f66387-c435-47f4-8ef3-bca1d94e2fdb\" (UID: \"61f66387-c435-47f4-8ef3-bca1d94e2fdb\") " Oct 14 15:45:04 crc kubenswrapper[4860]: I1014 15:45:04.764653 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61f66387-c435-47f4-8ef3-bca1d94e2fdb-secret-volume\") pod \"61f66387-c435-47f4-8ef3-bca1d94e2fdb\" (UID: \"61f66387-c435-47f4-8ef3-bca1d94e2fdb\") " Oct 14 15:45:04 crc kubenswrapper[4860]: I1014 15:45:04.765571 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f66387-c435-47f4-8ef3-bca1d94e2fdb-config-volume" (OuterVolumeSpecName: "config-volume") pod "61f66387-c435-47f4-8ef3-bca1d94e2fdb" (UID: "61f66387-c435-47f4-8ef3-bca1d94e2fdb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 15:45:04 crc kubenswrapper[4860]: I1014 15:45:04.783081 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f66387-c435-47f4-8ef3-bca1d94e2fdb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "61f66387-c435-47f4-8ef3-bca1d94e2fdb" (UID: "61f66387-c435-47f4-8ef3-bca1d94e2fdb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 15:45:04 crc kubenswrapper[4860]: I1014 15:45:04.792841 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f66387-c435-47f4-8ef3-bca1d94e2fdb-kube-api-access-7rrjd" (OuterVolumeSpecName: "kube-api-access-7rrjd") pod "61f66387-c435-47f4-8ef3-bca1d94e2fdb" (UID: "61f66387-c435-47f4-8ef3-bca1d94e2fdb"). InnerVolumeSpecName "kube-api-access-7rrjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:45:04 crc kubenswrapper[4860]: I1014 15:45:04.867309 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rrjd\" (UniqueName: \"kubernetes.io/projected/61f66387-c435-47f4-8ef3-bca1d94e2fdb-kube-api-access-7rrjd\") on node \"crc\" DevicePath \"\"" Oct 14 15:45:04 crc kubenswrapper[4860]: I1014 15:45:04.867344 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61f66387-c435-47f4-8ef3-bca1d94e2fdb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 15:45:04 crc kubenswrapper[4860]: I1014 15:45:04.867353 4860 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61f66387-c435-47f4-8ef3-bca1d94e2fdb-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 15:45:05 crc kubenswrapper[4860]: I1014 15:45:05.106548 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh" event={"ID":"61f66387-c435-47f4-8ef3-bca1d94e2fdb","Type":"ContainerDied","Data":"11bdd199abed2a9a96d809e086a230b91bde947983870e53eac21df99ad54079"} Oct 14 15:45:05 crc kubenswrapper[4860]: I1014 15:45:05.106594 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11bdd199abed2a9a96d809e086a230b91bde947983870e53eac21df99ad54079" Oct 14 15:45:05 crc kubenswrapper[4860]: I1014 15:45:05.106658 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340945-p99bh" Oct 14 15:45:05 crc kubenswrapper[4860]: I1014 15:45:05.726423 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7"] Oct 14 15:45:05 crc kubenswrapper[4860]: I1014 15:45:05.745892 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340900-rskw7"] Oct 14 15:45:07 crc kubenswrapper[4860]: I1014 15:45:07.072744 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="407016dc-637e-487c-ba77-86b2f4752266" path="/var/lib/kubelet/pods/407016dc-637e-487c-ba77-86b2f4752266/volumes" Oct 14 15:45:18 crc kubenswrapper[4860]: I1014 15:45:18.007210 4860 scope.go:117] "RemoveContainer" containerID="cad0eb55cd201b2a956a9f4753a524f95c0e2329b86e9f96ea4e57f39def2bb5" Oct 14 15:45:29 crc kubenswrapper[4860]: I1014 15:45:29.245821 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:45:29 crc kubenswrapper[4860]: I1014 15:45:29.246429 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:45:29 crc kubenswrapper[4860]: I1014 15:45:29.246485 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 15:45:29 crc kubenswrapper[4860]: I1014 15:45:29.247278 4860 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df284c3f5fc482d600dcd6fa235f27f82a32f7f2d6fa45712f66ca1ba04e34d2"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 15:45:29 crc kubenswrapper[4860]: I1014 15:45:29.247335 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://df284c3f5fc482d600dcd6fa235f27f82a32f7f2d6fa45712f66ca1ba04e34d2" gracePeriod=600 Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.030667 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7rqsg"] Oct 14 15:45:30 crc kubenswrapper[4860]: E1014 15:45:30.031440 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f66387-c435-47f4-8ef3-bca1d94e2fdb" containerName="collect-profiles" Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.031500 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f66387-c435-47f4-8ef3-bca1d94e2fdb" containerName="collect-profiles" Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.031766 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f66387-c435-47f4-8ef3-bca1d94e2fdb" containerName="collect-profiles" Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.033471 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.039711 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k872b\" (UniqueName: \"kubernetes.io/projected/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-kube-api-access-k872b\") pod \"community-operators-7rqsg\" (UID: \"660ca41c-fd64-4c3a-ae85-384aa52d4ffb\") " pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.039829 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-utilities\") pod \"community-operators-7rqsg\" (UID: \"660ca41c-fd64-4c3a-ae85-384aa52d4ffb\") " pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.039856 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-catalog-content\") pod \"community-operators-7rqsg\" (UID: \"660ca41c-fd64-4c3a-ae85-384aa52d4ffb\") " pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.045771 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7rqsg"] Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.141693 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-catalog-content\") pod \"community-operators-7rqsg\" (UID: \"660ca41c-fd64-4c3a-ae85-384aa52d4ffb\") " pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.141979 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k872b\" (UniqueName: \"kubernetes.io/projected/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-kube-api-access-k872b\") pod \"community-operators-7rqsg\" (UID: \"660ca41c-fd64-4c3a-ae85-384aa52d4ffb\") " pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.142068 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-utilities\") pod \"community-operators-7rqsg\" (UID: \"660ca41c-fd64-4c3a-ae85-384aa52d4ffb\") " pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.142220 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-catalog-content\") pod \"community-operators-7rqsg\" (UID: \"660ca41c-fd64-4c3a-ae85-384aa52d4ffb\") " pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.142407 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-utilities\") pod \"community-operators-7rqsg\" (UID: \"660ca41c-fd64-4c3a-ae85-384aa52d4ffb\") " pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.179131 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k872b\" (UniqueName: \"kubernetes.io/projected/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-kube-api-access-k872b\") pod \"community-operators-7rqsg\" (UID: \"660ca41c-fd64-4c3a-ae85-384aa52d4ffb\") " pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.317198 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="df284c3f5fc482d600dcd6fa235f27f82a32f7f2d6fa45712f66ca1ba04e34d2" exitCode=0 Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.317522 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"df284c3f5fc482d600dcd6fa235f27f82a32f7f2d6fa45712f66ca1ba04e34d2"} Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.317547 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad"} Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.317563 4860 scope.go:117] "RemoveContainer" containerID="e50211d387dea3714cf85984203f451dc37b891f5cd643e4f8e6f82ce5804713" Oct 14 15:45:30 crc kubenswrapper[4860]: I1014 15:45:30.366834 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:31 crc kubenswrapper[4860]: I1014 15:45:31.272345 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7rqsg"] Oct 14 15:45:31 crc kubenswrapper[4860]: I1014 15:45:31.329624 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7rqsg" event={"ID":"660ca41c-fd64-4c3a-ae85-384aa52d4ffb","Type":"ContainerStarted","Data":"c79f10f443876042f825766e82f26e0eca28dd357b5405d01de861c5b9b5fae6"} Oct 14 15:45:32 crc kubenswrapper[4860]: I1014 15:45:32.343137 4860 generic.go:334] "Generic (PLEG): container finished" podID="660ca41c-fd64-4c3a-ae85-384aa52d4ffb" containerID="082172b75e966948c2478a2da7b820c9363a04da630a9eba71edc612b6cfaee7" exitCode=0 Oct 14 15:45:32 crc kubenswrapper[4860]: I1014 15:45:32.343220 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7rqsg" event={"ID":"660ca41c-fd64-4c3a-ae85-384aa52d4ffb","Type":"ContainerDied","Data":"082172b75e966948c2478a2da7b820c9363a04da630a9eba71edc612b6cfaee7"} Oct 14 15:45:33 crc kubenswrapper[4860]: I1014 15:45:33.353920 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7rqsg" event={"ID":"660ca41c-fd64-4c3a-ae85-384aa52d4ffb","Type":"ContainerStarted","Data":"c5c66b4eab2e4c6afa2fd6203baefefcbd8651f469147be1eeb78044070e0ff0"} Oct 14 15:45:36 crc kubenswrapper[4860]: I1014 15:45:36.406550 4860 generic.go:334] "Generic (PLEG): container finished" podID="660ca41c-fd64-4c3a-ae85-384aa52d4ffb" containerID="c5c66b4eab2e4c6afa2fd6203baefefcbd8651f469147be1eeb78044070e0ff0" exitCode=0 Oct 14 15:45:36 crc kubenswrapper[4860]: I1014 15:45:36.406629 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7rqsg" event={"ID":"660ca41c-fd64-4c3a-ae85-384aa52d4ffb","Type":"ContainerDied","Data":"c5c66b4eab2e4c6afa2fd6203baefefcbd8651f469147be1eeb78044070e0ff0"} Oct 14 15:45:37 crc kubenswrapper[4860]: I1014 15:45:37.418641 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7rqsg" event={"ID":"660ca41c-fd64-4c3a-ae85-384aa52d4ffb","Type":"ContainerStarted","Data":"36291d3774dc76edb81fdd53ac7d8dfe6a712345b257c92641a8a566c925685d"} Oct 14 15:45:38 crc kubenswrapper[4860]: I1014 15:45:38.445856 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7rqsg" podStartSLOduration=4.74410809 podStartE2EDuration="9.445834543s" podCreationTimestamp="2025-10-14 15:45:29 +0000 UTC" firstStartedPulling="2025-10-14 15:45:32.34509176 +0000 UTC m=+3393.931875209" lastFinishedPulling="2025-10-14 15:45:37.046818213 +0000 UTC m=+3398.633601662" observedRunningTime="2025-10-14 15:45:38.440642007 +0000 UTC m=+3400.027425456" watchObservedRunningTime="2025-10-14 15:45:38.445834543 +0000 UTC m=+3400.032617992" Oct 14 15:45:40 crc kubenswrapper[4860]: I1014 15:45:40.367978 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:40 crc kubenswrapper[4860]: I1014 15:45:40.368375 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:40 crc kubenswrapper[4860]: I1014 15:45:40.420324 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:50 crc kubenswrapper[4860]: I1014 15:45:50.419241 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:50 crc kubenswrapper[4860]: I1014 15:45:50.467218 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7rqsg"] Oct 14 15:45:50 crc kubenswrapper[4860]: I1014 15:45:50.526344 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7rqsg" podUID="660ca41c-fd64-4c3a-ae85-384aa52d4ffb" containerName="registry-server" containerID="cri-o://36291d3774dc76edb81fdd53ac7d8dfe6a712345b257c92641a8a566c925685d" gracePeriod=2 Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.314136 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.481889 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-catalog-content\") pod \"660ca41c-fd64-4c3a-ae85-384aa52d4ffb\" (UID: \"660ca41c-fd64-4c3a-ae85-384aa52d4ffb\") " Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.482065 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k872b\" (UniqueName: \"kubernetes.io/projected/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-kube-api-access-k872b\") pod \"660ca41c-fd64-4c3a-ae85-384aa52d4ffb\" (UID: \"660ca41c-fd64-4c3a-ae85-384aa52d4ffb\") " Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.482135 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-utilities\") pod \"660ca41c-fd64-4c3a-ae85-384aa52d4ffb\" (UID: \"660ca41c-fd64-4c3a-ae85-384aa52d4ffb\") " Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.483221 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-utilities" (OuterVolumeSpecName: "utilities") pod "660ca41c-fd64-4c3a-ae85-384aa52d4ffb" (UID: "660ca41c-fd64-4c3a-ae85-384aa52d4ffb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.491673 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-kube-api-access-k872b" (OuterVolumeSpecName: "kube-api-access-k872b") pod "660ca41c-fd64-4c3a-ae85-384aa52d4ffb" (UID: "660ca41c-fd64-4c3a-ae85-384aa52d4ffb"). InnerVolumeSpecName "kube-api-access-k872b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.534498 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "660ca41c-fd64-4c3a-ae85-384aa52d4ffb" (UID: "660ca41c-fd64-4c3a-ae85-384aa52d4ffb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.538898 4860 generic.go:334] "Generic (PLEG): container finished" podID="660ca41c-fd64-4c3a-ae85-384aa52d4ffb" containerID="36291d3774dc76edb81fdd53ac7d8dfe6a712345b257c92641a8a566c925685d" exitCode=0 Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.538942 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7rqsg" event={"ID":"660ca41c-fd64-4c3a-ae85-384aa52d4ffb","Type":"ContainerDied","Data":"36291d3774dc76edb81fdd53ac7d8dfe6a712345b257c92641a8a566c925685d"} Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.538968 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7rqsg" event={"ID":"660ca41c-fd64-4c3a-ae85-384aa52d4ffb","Type":"ContainerDied","Data":"c79f10f443876042f825766e82f26e0eca28dd357b5405d01de861c5b9b5fae6"} Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.538981 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7rqsg" Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.538995 4860 scope.go:117] "RemoveContainer" containerID="36291d3774dc76edb81fdd53ac7d8dfe6a712345b257c92641a8a566c925685d" Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.564653 4860 scope.go:117] "RemoveContainer" containerID="c5c66b4eab2e4c6afa2fd6203baefefcbd8651f469147be1eeb78044070e0ff0" Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.582094 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7rqsg"] Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.590652 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7rqsg"] Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.597435 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.597469 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k872b\" (UniqueName: \"kubernetes.io/projected/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-kube-api-access-k872b\") on node \"crc\" DevicePath \"\"" Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.597481 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/660ca41c-fd64-4c3a-ae85-384aa52d4ffb-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.625091 4860 scope.go:117] "RemoveContainer" containerID="082172b75e966948c2478a2da7b820c9363a04da630a9eba71edc612b6cfaee7" Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.647853 4860 scope.go:117] "RemoveContainer" containerID="36291d3774dc76edb81fdd53ac7d8dfe6a712345b257c92641a8a566c925685d" Oct 14 15:45:51 crc kubenswrapper[4860]: E1014 15:45:51.648646 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36291d3774dc76edb81fdd53ac7d8dfe6a712345b257c92641a8a566c925685d\": container with ID starting with 36291d3774dc76edb81fdd53ac7d8dfe6a712345b257c92641a8a566c925685d not found: ID does not exist" containerID="36291d3774dc76edb81fdd53ac7d8dfe6a712345b257c92641a8a566c925685d" Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.648678 
4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36291d3774dc76edb81fdd53ac7d8dfe6a712345b257c92641a8a566c925685d"} err="failed to get container status \"36291d3774dc76edb81fdd53ac7d8dfe6a712345b257c92641a8a566c925685d\": rpc error: code = NotFound desc = could not find container \"36291d3774dc76edb81fdd53ac7d8dfe6a712345b257c92641a8a566c925685d\": container with ID starting with 36291d3774dc76edb81fdd53ac7d8dfe6a712345b257c92641a8a566c925685d not found: ID does not exist" Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.648698 4860 scope.go:117] "RemoveContainer" containerID="c5c66b4eab2e4c6afa2fd6203baefefcbd8651f469147be1eeb78044070e0ff0" Oct 14 15:45:51 crc kubenswrapper[4860]: E1014 15:45:51.649025 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5c66b4eab2e4c6afa2fd6203baefefcbd8651f469147be1eeb78044070e0ff0\": container with ID starting with c5c66b4eab2e4c6afa2fd6203baefefcbd8651f469147be1eeb78044070e0ff0 not found: ID does not exist" containerID="c5c66b4eab2e4c6afa2fd6203baefefcbd8651f469147be1eeb78044070e0ff0" Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.649152 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5c66b4eab2e4c6afa2fd6203baefefcbd8651f469147be1eeb78044070e0ff0"} err="failed to get container status \"c5c66b4eab2e4c6afa2fd6203baefefcbd8651f469147be1eeb78044070e0ff0\": rpc error: code = NotFound desc = could not find container \"c5c66b4eab2e4c6afa2fd6203baefefcbd8651f469147be1eeb78044070e0ff0\": container with ID starting with c5c66b4eab2e4c6afa2fd6203baefefcbd8651f469147be1eeb78044070e0ff0 not found: ID does not exist" Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.649165 4860 scope.go:117] "RemoveContainer" containerID="082172b75e966948c2478a2da7b820c9363a04da630a9eba71edc612b6cfaee7" Oct 14 15:45:51 crc kubenswrapper[4860]: E1014 15:45:51.649491 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"082172b75e966948c2478a2da7b820c9363a04da630a9eba71edc612b6cfaee7\": container with ID starting with 082172b75e966948c2478a2da7b820c9363a04da630a9eba71edc612b6cfaee7 not found: ID does not exist" containerID="082172b75e966948c2478a2da7b820c9363a04da630a9eba71edc612b6cfaee7" Oct 14 15:45:51 crc kubenswrapper[4860]: I1014 15:45:51.649515 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"082172b75e966948c2478a2da7b820c9363a04da630a9eba71edc612b6cfaee7"} err="failed to get container status \"082172b75e966948c2478a2da7b820c9363a04da630a9eba71edc612b6cfaee7\": rpc error: code = NotFound desc = could not find container \"082172b75e966948c2478a2da7b820c9363a04da630a9eba71edc612b6cfaee7\": container with ID starting with 082172b75e966948c2478a2da7b820c9363a04da630a9eba71edc612b6cfaee7 not found: ID does not exist" Oct 14 15:45:53 crc kubenswrapper[4860]: I1014 15:45:53.072189 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="660ca41c-fd64-4c3a-ae85-384aa52d4ffb" path="/var/lib/kubelet/pods/660ca41c-fd64-4c3a-ae85-384aa52d4ffb/volumes" Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.055868 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sdfqk"] Oct 14 15:46:09 crc kubenswrapper[4860]: E1014 15:46:09.056908 4860 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="660ca41c-fd64-4c3a-ae85-384aa52d4ffb" containerName="extract-content" Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.056923 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="660ca41c-fd64-4c3a-ae85-384aa52d4ffb" containerName="extract-content" Oct 14 15:46:09 crc kubenswrapper[4860]: E1014 15:46:09.056946 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660ca41c-fd64-4c3a-ae85-384aa52d4ffb" containerName="registry-server" Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.056952 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="660ca41c-fd64-4c3a-ae85-384aa52d4ffb" containerName="registry-server" Oct 14 15:46:09 crc kubenswrapper[4860]: E1014 15:46:09.056970 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660ca41c-fd64-4c3a-ae85-384aa52d4ffb" containerName="extract-utilities" Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.056976 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="660ca41c-fd64-4c3a-ae85-384aa52d4ffb" containerName="extract-utilities" Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.057378 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="660ca41c-fd64-4c3a-ae85-384aa52d4ffb" containerName="registry-server" Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.058669 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.080814 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sdfqk"] Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.140630 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp76d\" (UniqueName: \"kubernetes.io/projected/8625f4c2-285a-4443-9092-56fbd38207d7-kube-api-access-wp76d\") pod \"redhat-operators-sdfqk\" (UID: \"8625f4c2-285a-4443-9092-56fbd38207d7\") " pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.140713 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8625f4c2-285a-4443-9092-56fbd38207d7-utilities\") pod \"redhat-operators-sdfqk\" (UID: \"8625f4c2-285a-4443-9092-56fbd38207d7\") " pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.140770 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8625f4c2-285a-4443-9092-56fbd38207d7-catalog-content\") pod \"redhat-operators-sdfqk\" (UID: \"8625f4c2-285a-4443-9092-56fbd38207d7\") " pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.241814 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8625f4c2-285a-4443-9092-56fbd38207d7-utilities\") pod \"redhat-operators-sdfqk\" (UID: \"8625f4c2-285a-4443-9092-56fbd38207d7\") " pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.242161 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8625f4c2-285a-4443-9092-56fbd38207d7-catalog-content\") pod \"redhat-operators-sdfqk\" (UID: 
\"8625f4c2-285a-4443-9092-56fbd38207d7\") " pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.242358 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp76d\" (UniqueName: \"kubernetes.io/projected/8625f4c2-285a-4443-9092-56fbd38207d7-kube-api-access-wp76d\") pod \"redhat-operators-sdfqk\" (UID: \"8625f4c2-285a-4443-9092-56fbd38207d7\") " pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.242540 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8625f4c2-285a-4443-9092-56fbd38207d7-utilities\") pod \"redhat-operators-sdfqk\" (UID: \"8625f4c2-285a-4443-9092-56fbd38207d7\") " pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.242540 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8625f4c2-285a-4443-9092-56fbd38207d7-catalog-content\") pod \"redhat-operators-sdfqk\" (UID: \"8625f4c2-285a-4443-9092-56fbd38207d7\") " pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.270917 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp76d\" (UniqueName: \"kubernetes.io/projected/8625f4c2-285a-4443-9092-56fbd38207d7-kube-api-access-wp76d\") pod \"redhat-operators-sdfqk\" (UID: \"8625f4c2-285a-4443-9092-56fbd38207d7\") " pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:09 crc kubenswrapper[4860]: I1014 15:46:09.377606 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:10 crc kubenswrapper[4860]: I1014 15:46:09.923514 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sdfqk"] Oct 14 15:46:10 crc kubenswrapper[4860]: I1014 15:46:10.739714 4860 generic.go:334] "Generic (PLEG): container finished" podID="8625f4c2-285a-4443-9092-56fbd38207d7" containerID="2c30cd7100e94c64a94b4de28723108636e9ff5f55837c007ed76dfd0d44c36b" exitCode=0 Oct 14 15:46:10 crc kubenswrapper[4860]: I1014 15:46:10.739873 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdfqk" event={"ID":"8625f4c2-285a-4443-9092-56fbd38207d7","Type":"ContainerDied","Data":"2c30cd7100e94c64a94b4de28723108636e9ff5f55837c007ed76dfd0d44c36b"} Oct 14 15:46:10 crc kubenswrapper[4860]: I1014 15:46:10.740062 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdfqk" event={"ID":"8625f4c2-285a-4443-9092-56fbd38207d7","Type":"ContainerStarted","Data":"b1080db8af7b6715c159b6a63598a6b70dceae0c843262e96e591682a4cb7c3d"} Oct 14 15:46:11 crc kubenswrapper[4860]: I1014 15:46:11.749708 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdfqk" event={"ID":"8625f4c2-285a-4443-9092-56fbd38207d7","Type":"ContainerStarted","Data":"a392b5b7cd36072ac78baa138609ee6a4a909b2c8c346f4e0fbc06fe3d1dff8f"} Oct 14 15:46:16 crc kubenswrapper[4860]: I1014 15:46:16.803296 4860 generic.go:334] "Generic (PLEG): container finished" podID="8625f4c2-285a-4443-9092-56fbd38207d7" containerID="a392b5b7cd36072ac78baa138609ee6a4a909b2c8c346f4e0fbc06fe3d1dff8f" exitCode=0 Oct 14 15:46:16 crc kubenswrapper[4860]: I1014 15:46:16.803365 4860 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdfqk" event={"ID":"8625f4c2-285a-4443-9092-56fbd38207d7","Type":"ContainerDied","Data":"a392b5b7cd36072ac78baa138609ee6a4a909b2c8c346f4e0fbc06fe3d1dff8f"} Oct 14 15:46:17 crc kubenswrapper[4860]: I1014 15:46:17.816167 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdfqk" event={"ID":"8625f4c2-285a-4443-9092-56fbd38207d7","Type":"ContainerStarted","Data":"850a61ede2bc5fc87f8b7e23297f4093089784c18a68480fc95b05385fe1af4f"} Oct 14 15:46:17 crc kubenswrapper[4860]: I1014 15:46:17.835315 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sdfqk" podStartSLOduration=2.317146473 podStartE2EDuration="8.835291194s" podCreationTimestamp="2025-10-14 15:46:09 +0000 UTC" firstStartedPulling="2025-10-14 15:46:10.742980203 +0000 UTC m=+3432.329763652" lastFinishedPulling="2025-10-14 15:46:17.261124924 +0000 UTC m=+3438.847908373" observedRunningTime="2025-10-14 15:46:17.830388106 +0000 UTC m=+3439.417171565" watchObservedRunningTime="2025-10-14 15:46:17.835291194 +0000 UTC m=+3439.422074653" Oct 14 15:46:19 crc kubenswrapper[4860]: I1014 15:46:19.378868 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:19 crc kubenswrapper[4860]: I1014 15:46:19.379205 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:20 crc kubenswrapper[4860]: I1014 15:46:20.430086 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sdfqk" podUID="8625f4c2-285a-4443-9092-56fbd38207d7" containerName="registry-server" probeResult="failure" output=< Oct 14 15:46:20 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:46:20 crc kubenswrapper[4860]: > Oct 14 15:46:29 crc kubenswrapper[4860]: I1014 15:46:29.433331 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:29 crc kubenswrapper[4860]: I1014 15:46:29.493800 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:29 crc kubenswrapper[4860]: I1014 15:46:29.681076 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sdfqk"] Oct 14 15:46:30 crc kubenswrapper[4860]: I1014 15:46:30.934756 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sdfqk" podUID="8625f4c2-285a-4443-9092-56fbd38207d7" containerName="registry-server" containerID="cri-o://850a61ede2bc5fc87f8b7e23297f4093089784c18a68480fc95b05385fe1af4f" gracePeriod=2 Oct 14 15:46:31 crc kubenswrapper[4860]: I1014 15:46:31.629638 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:31 crc kubenswrapper[4860]: I1014 15:46:31.781451 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp76d\" (UniqueName: \"kubernetes.io/projected/8625f4c2-285a-4443-9092-56fbd38207d7-kube-api-access-wp76d\") pod \"8625f4c2-285a-4443-9092-56fbd38207d7\" (UID: \"8625f4c2-285a-4443-9092-56fbd38207d7\") " Oct 14 15:46:31 crc kubenswrapper[4860]: I1014 15:46:31.781741 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8625f4c2-285a-4443-9092-56fbd38207d7-utilities\") pod \"8625f4c2-285a-4443-9092-56fbd38207d7\" (UID: \"8625f4c2-285a-4443-9092-56fbd38207d7\") " Oct 14 15:46:31 crc kubenswrapper[4860]: I1014 15:46:31.781805 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8625f4c2-285a-4443-9092-56fbd38207d7-catalog-content\") pod \"8625f4c2-285a-4443-9092-56fbd38207d7\" (UID: \"8625f4c2-285a-4443-9092-56fbd38207d7\") " Oct 14 15:46:31 crc kubenswrapper[4860]: I1014 15:46:31.782545 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8625f4c2-285a-4443-9092-56fbd38207d7-utilities" (OuterVolumeSpecName: "utilities") pod "8625f4c2-285a-4443-9092-56fbd38207d7" (UID: "8625f4c2-285a-4443-9092-56fbd38207d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:46:31 crc kubenswrapper[4860]: I1014 15:46:31.797256 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8625f4c2-285a-4443-9092-56fbd38207d7-kube-api-access-wp76d" (OuterVolumeSpecName: "kube-api-access-wp76d") pod "8625f4c2-285a-4443-9092-56fbd38207d7" (UID: "8625f4c2-285a-4443-9092-56fbd38207d7"). InnerVolumeSpecName "kube-api-access-wp76d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:46:31 crc kubenswrapper[4860]: I1014 15:46:31.879190 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8625f4c2-285a-4443-9092-56fbd38207d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8625f4c2-285a-4443-9092-56fbd38207d7" (UID: "8625f4c2-285a-4443-9092-56fbd38207d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:46:31 crc kubenswrapper[4860]: I1014 15:46:31.884280 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8625f4c2-285a-4443-9092-56fbd38207d7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:46:31 crc kubenswrapper[4860]: I1014 15:46:31.884306 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp76d\" (UniqueName: \"kubernetes.io/projected/8625f4c2-285a-4443-9092-56fbd38207d7-kube-api-access-wp76d\") on node \"crc\" DevicePath \"\"" Oct 14 15:46:31 crc kubenswrapper[4860]: I1014 15:46:31.884321 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8625f4c2-285a-4443-9092-56fbd38207d7-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:46:31 crc kubenswrapper[4860]: I1014 15:46:31.954505 4860 generic.go:334] "Generic (PLEG): container finished" podID="8625f4c2-285a-4443-9092-56fbd38207d7" containerID="850a61ede2bc5fc87f8b7e23297f4093089784c18a68480fc95b05385fe1af4f" exitCode=0 Oct 14 15:46:31 crc kubenswrapper[4860]: I1014 15:46:31.954556 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdfqk" event={"ID":"8625f4c2-285a-4443-9092-56fbd38207d7","Type":"ContainerDied","Data":"850a61ede2bc5fc87f8b7e23297f4093089784c18a68480fc95b05385fe1af4f"} Oct 14 15:46:31 crc kubenswrapper[4860]: I1014 15:46:31.954590 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdfqk" event={"ID":"8625f4c2-285a-4443-9092-56fbd38207d7","Type":"ContainerDied","Data":"b1080db8af7b6715c159b6a63598a6b70dceae0c843262e96e591682a4cb7c3d"} Oct 14 15:46:31 crc kubenswrapper[4860]: I1014 15:46:31.954652 4860 scope.go:117] "RemoveContainer" containerID="850a61ede2bc5fc87f8b7e23297f4093089784c18a68480fc95b05385fe1af4f" Oct 14 15:46:31 crc kubenswrapper[4860]: I1014 15:46:31.954844 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sdfqk" Oct 14 15:46:31 crc kubenswrapper[4860]: I1014 15:46:31.985095 4860 scope.go:117] "RemoveContainer" containerID="a392b5b7cd36072ac78baa138609ee6a4a909b2c8c346f4e0fbc06fe3d1dff8f" Oct 14 15:46:32 crc kubenswrapper[4860]: I1014 15:46:32.000708 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sdfqk"] Oct 14 15:46:32 crc kubenswrapper[4860]: I1014 15:46:32.015725 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sdfqk"] Oct 14 15:46:32 crc kubenswrapper[4860]: I1014 15:46:32.083660 4860 scope.go:117] "RemoveContainer" containerID="2c30cd7100e94c64a94b4de28723108636e9ff5f55837c007ed76dfd0d44c36b" Oct 14 15:46:32 crc kubenswrapper[4860]: I1014 15:46:32.100421 4860 scope.go:117] "RemoveContainer" containerID="850a61ede2bc5fc87f8b7e23297f4093089784c18a68480fc95b05385fe1af4f" Oct 14 15:46:32 crc kubenswrapper[4860]: E1014 15:46:32.101006 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"850a61ede2bc5fc87f8b7e23297f4093089784c18a68480fc95b05385fe1af4f\": container with ID starting with 850a61ede2bc5fc87f8b7e23297f4093089784c18a68480fc95b05385fe1af4f not found: ID does not exist" containerID="850a61ede2bc5fc87f8b7e23297f4093089784c18a68480fc95b05385fe1af4f" Oct 14 15:46:32 crc kubenswrapper[4860]: I1014 15:46:32.101056 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"850a61ede2bc5fc87f8b7e23297f4093089784c18a68480fc95b05385fe1af4f"} err="failed to get container status \"850a61ede2bc5fc87f8b7e23297f4093089784c18a68480fc95b05385fe1af4f\": rpc error: code = NotFound desc = could not find container \"850a61ede2bc5fc87f8b7e23297f4093089784c18a68480fc95b05385fe1af4f\": container with ID starting with 850a61ede2bc5fc87f8b7e23297f4093089784c18a68480fc95b05385fe1af4f not found: ID does not exist" Oct 14 15:46:32 crc kubenswrapper[4860]: I1014 15:46:32.101082 4860 scope.go:117] "RemoveContainer" containerID="a392b5b7cd36072ac78baa138609ee6a4a909b2c8c346f4e0fbc06fe3d1dff8f" Oct 14 15:46:32 crc kubenswrapper[4860]: E1014 15:46:32.102199 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a392b5b7cd36072ac78baa138609ee6a4a909b2c8c346f4e0fbc06fe3d1dff8f\": container with ID starting with a392b5b7cd36072ac78baa138609ee6a4a909b2c8c346f4e0fbc06fe3d1dff8f not found: ID does not exist" containerID="a392b5b7cd36072ac78baa138609ee6a4a909b2c8c346f4e0fbc06fe3d1dff8f" Oct 14 15:46:32 crc kubenswrapper[4860]: I1014 15:46:32.102224 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a392b5b7cd36072ac78baa138609ee6a4a909b2c8c346f4e0fbc06fe3d1dff8f"} err="failed to get container status \"a392b5b7cd36072ac78baa138609ee6a4a909b2c8c346f4e0fbc06fe3d1dff8f\": rpc error: code = NotFound desc = could not find container \"a392b5b7cd36072ac78baa138609ee6a4a909b2c8c346f4e0fbc06fe3d1dff8f\": container with ID starting with a392b5b7cd36072ac78baa138609ee6a4a909b2c8c346f4e0fbc06fe3d1dff8f not found: ID does not exist" Oct 14 15:46:32 crc kubenswrapper[4860]: I1014 15:46:32.102238 4860 scope.go:117] "RemoveContainer" containerID="2c30cd7100e94c64a94b4de28723108636e9ff5f55837c007ed76dfd0d44c36b" Oct 14 15:46:32 crc kubenswrapper[4860]: E1014 15:46:32.102990 4860 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2c30cd7100e94c64a94b4de28723108636e9ff5f55837c007ed76dfd0d44c36b\": container with ID starting with 2c30cd7100e94c64a94b4de28723108636e9ff5f55837c007ed76dfd0d44c36b not found: ID does not exist" containerID="2c30cd7100e94c64a94b4de28723108636e9ff5f55837c007ed76dfd0d44c36b" Oct 14 15:46:32 crc kubenswrapper[4860]: I1014 15:46:32.103018 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c30cd7100e94c64a94b4de28723108636e9ff5f55837c007ed76dfd0d44c36b"} err="failed to get container status \"2c30cd7100e94c64a94b4de28723108636e9ff5f55837c007ed76dfd0d44c36b\": rpc error: code = NotFound desc = could not find container \"2c30cd7100e94c64a94b4de28723108636e9ff5f55837c007ed76dfd0d44c36b\": container with ID starting with 2c30cd7100e94c64a94b4de28723108636e9ff5f55837c007ed76dfd0d44c36b not found: ID does not exist" Oct 14 15:46:33 crc kubenswrapper[4860]: I1014 15:46:33.072793 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8625f4c2-285a-4443-9092-56fbd38207d7" path="/var/lib/kubelet/pods/8625f4c2-285a-4443-9092-56fbd38207d7/volumes" Oct 14 15:46:58 crc kubenswrapper[4860]: I1014 15:46:58.883829 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tzqkg"] Oct 14 15:46:58 crc kubenswrapper[4860]: E1014 15:46:58.886974 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8625f4c2-285a-4443-9092-56fbd38207d7" containerName="registry-server" Oct 14 15:46:58 crc kubenswrapper[4860]: I1014 15:46:58.887112 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8625f4c2-285a-4443-9092-56fbd38207d7" containerName="registry-server" Oct 14 15:46:58 crc kubenswrapper[4860]: E1014 15:46:58.887234 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8625f4c2-285a-4443-9092-56fbd38207d7" containerName="extract-utilities" Oct 14 15:46:58 crc kubenswrapper[4860]: I1014 15:46:58.887370 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8625f4c2-285a-4443-9092-56fbd38207d7" containerName="extract-utilities" Oct 14 15:46:58 crc kubenswrapper[4860]: E1014 15:46:58.887491 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8625f4c2-285a-4443-9092-56fbd38207d7" containerName="extract-content" Oct 14 15:46:58 crc kubenswrapper[4860]: I1014 15:46:58.887566 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8625f4c2-285a-4443-9092-56fbd38207d7" containerName="extract-content" Oct 14 15:46:58 crc kubenswrapper[4860]: I1014 15:46:58.887914 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="8625f4c2-285a-4443-9092-56fbd38207d7" containerName="registry-server" Oct 14 15:46:58 crc kubenswrapper[4860]: I1014 15:46:58.889864 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:46:58 crc kubenswrapper[4860]: I1014 15:46:58.897337 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzqkg"] Oct 14 15:46:59 crc kubenswrapper[4860]: I1014 15:46:59.013750 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5csx7\" (UniqueName: \"kubernetes.io/projected/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-kube-api-access-5csx7\") pod \"redhat-marketplace-tzqkg\" (UID: \"c1c9fbc6-eaf4-4c51-8aff-b10616130d48\") " pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:46:59 crc kubenswrapper[4860]: I1014 15:46:59.014118 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-catalog-content\") pod \"redhat-marketplace-tzqkg\" (UID: \"c1c9fbc6-eaf4-4c51-8aff-b10616130d48\") " pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:46:59 crc kubenswrapper[4860]: I1014 15:46:59.014148 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-utilities\") pod \"redhat-marketplace-tzqkg\" (UID: \"c1c9fbc6-eaf4-4c51-8aff-b10616130d48\") " pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:46:59 crc kubenswrapper[4860]: I1014 15:46:59.116135 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5csx7\" (UniqueName: \"kubernetes.io/projected/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-kube-api-access-5csx7\") pod \"redhat-marketplace-tzqkg\" (UID: \"c1c9fbc6-eaf4-4c51-8aff-b10616130d48\") " pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:46:59 crc kubenswrapper[4860]: I1014 15:46:59.116225 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-catalog-content\") pod \"redhat-marketplace-tzqkg\" (UID: \"c1c9fbc6-eaf4-4c51-8aff-b10616130d48\") " pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:46:59 crc kubenswrapper[4860]: I1014 15:46:59.116264 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-utilities\") pod \"redhat-marketplace-tzqkg\" (UID: \"c1c9fbc6-eaf4-4c51-8aff-b10616130d48\") " pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:46:59 crc kubenswrapper[4860]: I1014 15:46:59.116676 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-utilities\") pod \"redhat-marketplace-tzqkg\" (UID: \"c1c9fbc6-eaf4-4c51-8aff-b10616130d48\") " pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:46:59 crc kubenswrapper[4860]: I1014 15:46:59.116754 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-catalog-content\") pod \"redhat-marketplace-tzqkg\" (UID: \"c1c9fbc6-eaf4-4c51-8aff-b10616130d48\") " pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:46:59 crc kubenswrapper[4860]: I1014 15:46:59.150099 4860 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5csx7\" (UniqueName: \"kubernetes.io/projected/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-kube-api-access-5csx7\") pod \"redhat-marketplace-tzqkg\" (UID: \"c1c9fbc6-eaf4-4c51-8aff-b10616130d48\") " pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:46:59 crc kubenswrapper[4860]: I1014 15:46:59.210516 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:46:59 crc kubenswrapper[4860]: I1014 15:46:59.749518 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzqkg"] Oct 14 15:47:00 crc kubenswrapper[4860]: I1014 15:47:00.186626 4860 generic.go:334] "Generic (PLEG): container finished" podID="c1c9fbc6-eaf4-4c51-8aff-b10616130d48" containerID="b4137479ce94248a7aa94bbb5690ab4cfb0ab2e7540165ae884bf220dc1c4d5e" exitCode=0 Oct 14 15:47:00 crc kubenswrapper[4860]: I1014 15:47:00.187252 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzqkg" event={"ID":"c1c9fbc6-eaf4-4c51-8aff-b10616130d48","Type":"ContainerDied","Data":"b4137479ce94248a7aa94bbb5690ab4cfb0ab2e7540165ae884bf220dc1c4d5e"} Oct 14 15:47:00 crc kubenswrapper[4860]: I1014 15:47:00.187287 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzqkg" event={"ID":"c1c9fbc6-eaf4-4c51-8aff-b10616130d48","Type":"ContainerStarted","Data":"78b81a643a950302b5daa6e0661a5c25c79e501d304fd0749513142c461ceb9d"} Oct 14 15:47:01 crc kubenswrapper[4860]: I1014 15:47:01.197723 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzqkg" event={"ID":"c1c9fbc6-eaf4-4c51-8aff-b10616130d48","Type":"ContainerStarted","Data":"cb99b18e3d8b584ce8bee8996fcc62c11cd9f34ff68afd8979f90ff231aba80c"} Oct 14 15:47:02 crc kubenswrapper[4860]: I1014 15:47:02.207352 4860 generic.go:334] "Generic (PLEG): container finished" podID="c1c9fbc6-eaf4-4c51-8aff-b10616130d48" containerID="cb99b18e3d8b584ce8bee8996fcc62c11cd9f34ff68afd8979f90ff231aba80c" exitCode=0 Oct 14 15:47:02 crc kubenswrapper[4860]: I1014 15:47:02.207402 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzqkg" event={"ID":"c1c9fbc6-eaf4-4c51-8aff-b10616130d48","Type":"ContainerDied","Data":"cb99b18e3d8b584ce8bee8996fcc62c11cd9f34ff68afd8979f90ff231aba80c"} Oct 14 15:47:03 crc kubenswrapper[4860]: I1014 15:47:03.219062 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzqkg" event={"ID":"c1c9fbc6-eaf4-4c51-8aff-b10616130d48","Type":"ContainerStarted","Data":"851f881555b4fbee324c29a21cdecd7b72bfaff52f00376ee93141700c4e3a8a"} Oct 14 15:47:03 crc kubenswrapper[4860]: I1014 15:47:03.241935 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tzqkg" podStartSLOduration=2.833353821 podStartE2EDuration="5.241919128s" podCreationTimestamp="2025-10-14 15:46:58 +0000 UTC" firstStartedPulling="2025-10-14 15:47:00.18835594 +0000 UTC m=+3481.775139389" lastFinishedPulling="2025-10-14 15:47:02.596921247 +0000 UTC m=+3484.183704696" observedRunningTime="2025-10-14 15:47:03.23988634 +0000 UTC m=+3484.826669789" watchObservedRunningTime="2025-10-14 15:47:03.241919128 +0000 UTC m=+3484.828702577" Oct 14 15:47:09 crc kubenswrapper[4860]: I1014 15:47:09.211167 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:47:09 crc kubenswrapper[4860]: I1014 15:47:09.211681 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:47:09 crc kubenswrapper[4860]: I1014 15:47:09.263510 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:47:09 crc kubenswrapper[4860]: I1014 15:47:09.328672 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:47:09 crc kubenswrapper[4860]: I1014 15:47:09.499848 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzqkg"] Oct 14 15:47:11 crc kubenswrapper[4860]: I1014 15:47:11.290945 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tzqkg" podUID="c1c9fbc6-eaf4-4c51-8aff-b10616130d48" containerName="registry-server" containerID="cri-o://851f881555b4fbee324c29a21cdecd7b72bfaff52f00376ee93141700c4e3a8a" gracePeriod=2 Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.002978 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.047957 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-utilities\") pod \"c1c9fbc6-eaf4-4c51-8aff-b10616130d48\" (UID: \"c1c9fbc6-eaf4-4c51-8aff-b10616130d48\") " Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.048121 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5csx7\" (UniqueName: \"kubernetes.io/projected/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-kube-api-access-5csx7\") pod \"c1c9fbc6-eaf4-4c51-8aff-b10616130d48\" (UID: \"c1c9fbc6-eaf4-4c51-8aff-b10616130d48\") " Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.048271 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-catalog-content\") pod \"c1c9fbc6-eaf4-4c51-8aff-b10616130d48\" (UID: \"c1c9fbc6-eaf4-4c51-8aff-b10616130d48\") " Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.048742 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-utilities" (OuterVolumeSpecName: "utilities") pod "c1c9fbc6-eaf4-4c51-8aff-b10616130d48" (UID: "c1c9fbc6-eaf4-4c51-8aff-b10616130d48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.048858 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.053265 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-kube-api-access-5csx7" (OuterVolumeSpecName: "kube-api-access-5csx7") pod "c1c9fbc6-eaf4-4c51-8aff-b10616130d48" (UID: "c1c9fbc6-eaf4-4c51-8aff-b10616130d48"). InnerVolumeSpecName "kube-api-access-5csx7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.063375 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1c9fbc6-eaf4-4c51-8aff-b10616130d48" (UID: "c1c9fbc6-eaf4-4c51-8aff-b10616130d48"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.151176 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.151211 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5csx7\" (UniqueName: \"kubernetes.io/projected/c1c9fbc6-eaf4-4c51-8aff-b10616130d48-kube-api-access-5csx7\") on node \"crc\" DevicePath \"\"" Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.301186 4860 generic.go:334] "Generic (PLEG): container finished" podID="c1c9fbc6-eaf4-4c51-8aff-b10616130d48" containerID="851f881555b4fbee324c29a21cdecd7b72bfaff52f00376ee93141700c4e3a8a" exitCode=0 Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.301226 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzqkg" event={"ID":"c1c9fbc6-eaf4-4c51-8aff-b10616130d48","Type":"ContainerDied","Data":"851f881555b4fbee324c29a21cdecd7b72bfaff52f00376ee93141700c4e3a8a"} Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.301254 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzqkg" event={"ID":"c1c9fbc6-eaf4-4c51-8aff-b10616130d48","Type":"ContainerDied","Data":"78b81a643a950302b5daa6e0661a5c25c79e501d304fd0749513142c461ceb9d"} Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.301271 4860 scope.go:117] "RemoveContainer" containerID="851f881555b4fbee324c29a21cdecd7b72bfaff52f00376ee93141700c4e3a8a" Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.301266 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tzqkg" Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.322949 4860 scope.go:117] "RemoveContainer" containerID="cb99b18e3d8b584ce8bee8996fcc62c11cd9f34ff68afd8979f90ff231aba80c" Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.334160 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzqkg"] Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.350258 4860 scope.go:117] "RemoveContainer" containerID="b4137479ce94248a7aa94bbb5690ab4cfb0ab2e7540165ae884bf220dc1c4d5e" Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.351594 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzqkg"] Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.398868 4860 scope.go:117] "RemoveContainer" containerID="851f881555b4fbee324c29a21cdecd7b72bfaff52f00376ee93141700c4e3a8a" Oct 14 15:47:12 crc kubenswrapper[4860]: E1014 15:47:12.402332 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"851f881555b4fbee324c29a21cdecd7b72bfaff52f00376ee93141700c4e3a8a\": container with ID starting with 851f881555b4fbee324c29a21cdecd7b72bfaff52f00376ee93141700c4e3a8a not found: ID does not exist" containerID="851f881555b4fbee324c29a21cdecd7b72bfaff52f00376ee93141700c4e3a8a" Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.402381 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"851f881555b4fbee324c29a21cdecd7b72bfaff52f00376ee93141700c4e3a8a"} err="failed to get container status \"851f881555b4fbee324c29a21cdecd7b72bfaff52f00376ee93141700c4e3a8a\": rpc error: code = NotFound desc = could not find container \"851f881555b4fbee324c29a21cdecd7b72bfaff52f00376ee93141700c4e3a8a\": container with ID starting with 851f881555b4fbee324c29a21cdecd7b72bfaff52f00376ee93141700c4e3a8a not found: ID does not exist" Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.402411 4860 scope.go:117] "RemoveContainer" containerID="cb99b18e3d8b584ce8bee8996fcc62c11cd9f34ff68afd8979f90ff231aba80c" Oct 14 15:47:12 crc kubenswrapper[4860]: E1014 15:47:12.404412 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb99b18e3d8b584ce8bee8996fcc62c11cd9f34ff68afd8979f90ff231aba80c\": container with ID starting with cb99b18e3d8b584ce8bee8996fcc62c11cd9f34ff68afd8979f90ff231aba80c not found: ID does not exist" containerID="cb99b18e3d8b584ce8bee8996fcc62c11cd9f34ff68afd8979f90ff231aba80c" Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.404472 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb99b18e3d8b584ce8bee8996fcc62c11cd9f34ff68afd8979f90ff231aba80c"} err="failed to get container status \"cb99b18e3d8b584ce8bee8996fcc62c11cd9f34ff68afd8979f90ff231aba80c\": rpc error: code = NotFound desc = could not find container \"cb99b18e3d8b584ce8bee8996fcc62c11cd9f34ff68afd8979f90ff231aba80c\": container with ID starting with cb99b18e3d8b584ce8bee8996fcc62c11cd9f34ff68afd8979f90ff231aba80c not found: ID does not exist" Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.404498 4860 scope.go:117] "RemoveContainer" containerID="b4137479ce94248a7aa94bbb5690ab4cfb0ab2e7540165ae884bf220dc1c4d5e" Oct 14 15:47:12 crc kubenswrapper[4860]: E1014 15:47:12.405208 4860 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b4137479ce94248a7aa94bbb5690ab4cfb0ab2e7540165ae884bf220dc1c4d5e\": container with ID starting with b4137479ce94248a7aa94bbb5690ab4cfb0ab2e7540165ae884bf220dc1c4d5e not found: ID does not exist" containerID="b4137479ce94248a7aa94bbb5690ab4cfb0ab2e7540165ae884bf220dc1c4d5e" Oct 14 15:47:12 crc kubenswrapper[4860]: I1014 15:47:12.405236 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4137479ce94248a7aa94bbb5690ab4cfb0ab2e7540165ae884bf220dc1c4d5e"} err="failed to get container status \"b4137479ce94248a7aa94bbb5690ab4cfb0ab2e7540165ae884bf220dc1c4d5e\": rpc error: code = NotFound desc = could not find container \"b4137479ce94248a7aa94bbb5690ab4cfb0ab2e7540165ae884bf220dc1c4d5e\": container with ID starting with b4137479ce94248a7aa94bbb5690ab4cfb0ab2e7540165ae884bf220dc1c4d5e not found: ID does not exist" Oct 14 15:47:13 crc kubenswrapper[4860]: I1014 15:47:13.072771 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1c9fbc6-eaf4-4c51-8aff-b10616130d48" path="/var/lib/kubelet/pods/c1c9fbc6-eaf4-4c51-8aff-b10616130d48/volumes" Oct 14 15:47:29 crc kubenswrapper[4860]: I1014 15:47:29.246320 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:47:29 crc kubenswrapper[4860]: I1014 15:47:29.246884 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:47:59 crc kubenswrapper[4860]: I1014 15:47:59.246079 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:47:59 crc kubenswrapper[4860]: I1014 15:47:59.246790 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:48:29 crc kubenswrapper[4860]: I1014 15:48:29.245382 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:48:29 crc kubenswrapper[4860]: I1014 15:48:29.245915 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:48:29 crc kubenswrapper[4860]: I1014 15:48:29.245958 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 15:48:29 crc kubenswrapper[4860]: I1014 15:48:29.246634 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 15:48:29 crc kubenswrapper[4860]: I1014 15:48:29.246684 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" gracePeriod=600 Oct 14 15:48:29 crc kubenswrapper[4860]: E1014 15:48:29.376042 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:48:30 crc kubenswrapper[4860]: I1014 15:48:30.027315 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" exitCode=0 Oct 14 15:48:30 crc kubenswrapper[4860]: I1014 15:48:30.027381 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad"} Oct 14 15:48:30 crc kubenswrapper[4860]: I1014 15:48:30.027872 4860 scope.go:117] "RemoveContainer" containerID="df284c3f5fc482d600dcd6fa235f27f82a32f7f2d6fa45712f66ca1ba04e34d2" Oct 14 15:48:30 crc kubenswrapper[4860]: I1014 15:48:30.028843 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:48:30 crc kubenswrapper[4860]: E1014 15:48:30.029172 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:48:41 crc kubenswrapper[4860]: I1014 15:48:41.061765 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:48:41 crc kubenswrapper[4860]: E1014 15:48:41.062704 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:48:55 crc 
kubenswrapper[4860]: I1014 15:48:55.062173 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:48:55 crc kubenswrapper[4860]: E1014 15:48:55.062901 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:49:07 crc kubenswrapper[4860]: I1014 15:49:07.061305 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:49:07 crc kubenswrapper[4860]: E1014 15:49:07.062169 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:49:20 crc kubenswrapper[4860]: I1014 15:49:20.062543 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:49:20 crc kubenswrapper[4860]: E1014 15:49:20.063356 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:49:33 crc kubenswrapper[4860]: I1014 15:49:33.061912 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:49:33 crc kubenswrapper[4860]: E1014 15:49:33.062874 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:49:46 crc kubenswrapper[4860]: I1014 15:49:46.061910 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:49:46 crc kubenswrapper[4860]: E1014 15:49:46.062989 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:49:59 crc kubenswrapper[4860]: I1014 15:49:59.073994 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:49:59 crc 
kubenswrapper[4860]: E1014 15:49:59.075587 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:50:13 crc kubenswrapper[4860]: I1014 15:50:13.061445 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:50:13 crc kubenswrapper[4860]: E1014 15:50:13.062179 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:50:28 crc kubenswrapper[4860]: I1014 15:50:28.061859 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:50:28 crc kubenswrapper[4860]: E1014 15:50:28.062765 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:50:39 crc kubenswrapper[4860]: I1014 15:50:39.067945 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:50:39 crc kubenswrapper[4860]: E1014 15:50:39.068828 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:50:51 crc kubenswrapper[4860]: I1014 15:50:51.062009 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:50:51 crc kubenswrapper[4860]: E1014 15:50:51.063046 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:51:05 crc kubenswrapper[4860]: I1014 15:51:05.061717 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:51:05 crc kubenswrapper[4860]: E1014 15:51:05.062498 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.623600 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-48p8c"] Oct 14 15:51:19 crc kubenswrapper[4860]: E1014 15:51:19.624520 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c9fbc6-eaf4-4c51-8aff-b10616130d48" containerName="extract-content" Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.624533 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c9fbc6-eaf4-4c51-8aff-b10616130d48" containerName="extract-content" Oct 14 15:51:19 crc kubenswrapper[4860]: E1014 15:51:19.624557 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c9fbc6-eaf4-4c51-8aff-b10616130d48" containerName="extract-utilities" Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.624563 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c9fbc6-eaf4-4c51-8aff-b10616130d48" containerName="extract-utilities" Oct 14 15:51:19 crc kubenswrapper[4860]: E1014 15:51:19.624579 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c9fbc6-eaf4-4c51-8aff-b10616130d48" containerName="registry-server" Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.624586 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c9fbc6-eaf4-4c51-8aff-b10616130d48" containerName="registry-server" Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.624765 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1c9fbc6-eaf4-4c51-8aff-b10616130d48" containerName="registry-server" Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.626070 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-48p8c" Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.654960 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-48p8c"] Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.806524 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-894nb\" (UniqueName: \"kubernetes.io/projected/7547b7d4-7dbb-4f07-a064-8862a12c572c-kube-api-access-894nb\") pod \"certified-operators-48p8c\" (UID: \"7547b7d4-7dbb-4f07-a064-8862a12c572c\") " pod="openshift-marketplace/certified-operators-48p8c" Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.806605 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7547b7d4-7dbb-4f07-a064-8862a12c572c-utilities\") pod \"certified-operators-48p8c\" (UID: \"7547b7d4-7dbb-4f07-a064-8862a12c572c\") " pod="openshift-marketplace/certified-operators-48p8c" Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.807102 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7547b7d4-7dbb-4f07-a064-8862a12c572c-catalog-content\") pod \"certified-operators-48p8c\" (UID: \"7547b7d4-7dbb-4f07-a064-8862a12c572c\") " pod="openshift-marketplace/certified-operators-48p8c" Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.909046 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-894nb\" (UniqueName: \"kubernetes.io/projected/7547b7d4-7dbb-4f07-a064-8862a12c572c-kube-api-access-894nb\") pod \"certified-operators-48p8c\" (UID: \"7547b7d4-7dbb-4f07-a064-8862a12c572c\") " pod="openshift-marketplace/certified-operators-48p8c" Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.909118 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7547b7d4-7dbb-4f07-a064-8862a12c572c-utilities\") pod \"certified-operators-48p8c\" (UID: \"7547b7d4-7dbb-4f07-a064-8862a12c572c\") " pod="openshift-marketplace/certified-operators-48p8c" Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.909211 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7547b7d4-7dbb-4f07-a064-8862a12c572c-catalog-content\") pod \"certified-operators-48p8c\" (UID: \"7547b7d4-7dbb-4f07-a064-8862a12c572c\") " pod="openshift-marketplace/certified-operators-48p8c" Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.909733 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7547b7d4-7dbb-4f07-a064-8862a12c572c-catalog-content\") pod \"certified-operators-48p8c\" (UID: \"7547b7d4-7dbb-4f07-a064-8862a12c572c\") " pod="openshift-marketplace/certified-operators-48p8c" Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.909962 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7547b7d4-7dbb-4f07-a064-8862a12c572c-utilities\") pod \"certified-operators-48p8c\" (UID: \"7547b7d4-7dbb-4f07-a064-8862a12c572c\") " pod="openshift-marketplace/certified-operators-48p8c" Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.930239 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-894nb\" (UniqueName: \"kubernetes.io/projected/7547b7d4-7dbb-4f07-a064-8862a12c572c-kube-api-access-894nb\") pod \"certified-operators-48p8c\" (UID: \"7547b7d4-7dbb-4f07-a064-8862a12c572c\") " pod="openshift-marketplace/certified-operators-48p8c" Oct 14 15:51:19 crc kubenswrapper[4860]: I1014 15:51:19.950583 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48p8c" Oct 14 15:51:20 crc kubenswrapper[4860]: I1014 15:51:20.065577 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:51:20 crc kubenswrapper[4860]: E1014 15:51:20.065774 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:51:20 crc kubenswrapper[4860]: I1014 15:51:20.491808 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-48p8c"] Oct 14 15:51:20 crc kubenswrapper[4860]: I1014 15:51:20.652869 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48p8c" event={"ID":"7547b7d4-7dbb-4f07-a064-8862a12c572c","Type":"ContainerStarted","Data":"c6be0c2e677668ee02246cf655673d54bd68d813b7297904730a34aac4a50aed"} Oct 14 15:51:21 crc kubenswrapper[4860]: I1014 15:51:21.662610 4860 generic.go:334] "Generic (PLEG): container finished" podID="7547b7d4-7dbb-4f07-a064-8862a12c572c" containerID="e83ad80768f2152244ed95da5e46be0f8ac571acefd92a79c6bed2ad18d3c4a2" exitCode=0 Oct 14 15:51:21 crc kubenswrapper[4860]: I1014 15:51:21.663619 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48p8c" event={"ID":"7547b7d4-7dbb-4f07-a064-8862a12c572c","Type":"ContainerDied","Data":"e83ad80768f2152244ed95da5e46be0f8ac571acefd92a79c6bed2ad18d3c4a2"} Oct 14 15:51:21 crc kubenswrapper[4860]: I1014 15:51:21.664882 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 15:51:28 crc kubenswrapper[4860]: I1014 15:51:28.751164 4860 generic.go:334] "Generic (PLEG): container finished" podID="7547b7d4-7dbb-4f07-a064-8862a12c572c" containerID="efd60f09b8b787dc773f40087874ba82afa7da1166cfb2a1091dbfc5aae7d769" exitCode=0 Oct 14 15:51:28 crc kubenswrapper[4860]: I1014 15:51:28.751372 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48p8c" event={"ID":"7547b7d4-7dbb-4f07-a064-8862a12c572c","Type":"ContainerDied","Data":"efd60f09b8b787dc773f40087874ba82afa7da1166cfb2a1091dbfc5aae7d769"} Oct 14 15:51:30 crc kubenswrapper[4860]: I1014 15:51:30.771402 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48p8c" event={"ID":"7547b7d4-7dbb-4f07-a064-8862a12c572c","Type":"ContainerStarted","Data":"468b9539d70c729261652c9eeb96c3877b69c803da243da1dd73b61b85f39be9"} Oct 14 15:51:30 crc kubenswrapper[4860]: I1014 15:51:30.791859 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-48p8c" podStartSLOduration=3.706974404 
podStartE2EDuration="11.791836243s" podCreationTimestamp="2025-10-14 15:51:19 +0000 UTC" firstStartedPulling="2025-10-14 15:51:21.664566615 +0000 UTC m=+3743.251350064" lastFinishedPulling="2025-10-14 15:51:29.749428464 +0000 UTC m=+3751.336211903" observedRunningTime="2025-10-14 15:51:30.78759679 +0000 UTC m=+3752.374380259" watchObservedRunningTime="2025-10-14 15:51:30.791836243 +0000 UTC m=+3752.378619692" Oct 14 15:51:31 crc kubenswrapper[4860]: I1014 15:51:31.061851 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:51:31 crc kubenswrapper[4860]: E1014 15:51:31.062580 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:51:39 crc kubenswrapper[4860]: I1014 15:51:39.951578 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-48p8c" Oct 14 15:51:39 crc kubenswrapper[4860]: I1014 15:51:39.952270 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-48p8c" Oct 14 15:51:40 crc kubenswrapper[4860]: I1014 15:51:40.996281 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-48p8c" podUID="7547b7d4-7dbb-4f07-a064-8862a12c572c" containerName="registry-server" probeResult="failure" output=< Oct 14 15:51:40 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 15:51:40 crc kubenswrapper[4860]: > Oct 14 15:51:45 crc kubenswrapper[4860]: I1014 15:51:45.063393 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:51:45 crc kubenswrapper[4860]: E1014 15:51:45.065253 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:51:50 crc kubenswrapper[4860]: I1014 15:51:50.005123 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-48p8c" Oct 14 15:51:50 crc kubenswrapper[4860]: I1014 15:51:50.052609 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-48p8c" Oct 14 15:51:50 crc kubenswrapper[4860]: I1014 15:51:50.874191 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-48p8c"] Oct 14 15:51:50 crc kubenswrapper[4860]: I1014 15:51:50.941478 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vt7nl"] Oct 14 15:51:50 crc kubenswrapper[4860]: I1014 15:51:50.943061 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vt7nl" podUID="699e6482-e421-4a0a-b00e-8378366000ba" containerName="registry-server" 
containerID="cri-o://2f373eab8746489647fb05711af93ac47c036366ced508e76678947911e58ac0" gracePeriod=2 Oct 14 15:51:52 crc kubenswrapper[4860]: I1014 15:51:52.008908 4860 generic.go:334] "Generic (PLEG): container finished" podID="699e6482-e421-4a0a-b00e-8378366000ba" containerID="2f373eab8746489647fb05711af93ac47c036366ced508e76678947911e58ac0" exitCode=0 Oct 14 15:51:52 crc kubenswrapper[4860]: I1014 15:51:52.011004 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt7nl" event={"ID":"699e6482-e421-4a0a-b00e-8378366000ba","Type":"ContainerDied","Data":"2f373eab8746489647fb05711af93ac47c036366ced508e76678947911e58ac0"} Oct 14 15:51:52 crc kubenswrapper[4860]: I1014 15:51:52.206745 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 15:51:52 crc kubenswrapper[4860]: I1014 15:51:52.366819 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/699e6482-e421-4a0a-b00e-8378366000ba-catalog-content\") pod \"699e6482-e421-4a0a-b00e-8378366000ba\" (UID: \"699e6482-e421-4a0a-b00e-8378366000ba\") " Oct 14 15:51:52 crc kubenswrapper[4860]: I1014 15:51:52.366882 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xx42\" (UniqueName: \"kubernetes.io/projected/699e6482-e421-4a0a-b00e-8378366000ba-kube-api-access-4xx42\") pod \"699e6482-e421-4a0a-b00e-8378366000ba\" (UID: \"699e6482-e421-4a0a-b00e-8378366000ba\") " Oct 14 15:51:52 crc kubenswrapper[4860]: I1014 15:51:52.366974 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/699e6482-e421-4a0a-b00e-8378366000ba-utilities\") pod \"699e6482-e421-4a0a-b00e-8378366000ba\" (UID: \"699e6482-e421-4a0a-b00e-8378366000ba\") " Oct 14 15:51:52 crc kubenswrapper[4860]: I1014 15:51:52.367730 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/699e6482-e421-4a0a-b00e-8378366000ba-utilities" (OuterVolumeSpecName: "utilities") pod "699e6482-e421-4a0a-b00e-8378366000ba" (UID: "699e6482-e421-4a0a-b00e-8378366000ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:51:52 crc kubenswrapper[4860]: I1014 15:51:52.386870 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/699e6482-e421-4a0a-b00e-8378366000ba-kube-api-access-4xx42" (OuterVolumeSpecName: "kube-api-access-4xx42") pod "699e6482-e421-4a0a-b00e-8378366000ba" (UID: "699e6482-e421-4a0a-b00e-8378366000ba"). InnerVolumeSpecName "kube-api-access-4xx42". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:51:52 crc kubenswrapper[4860]: I1014 15:51:52.440987 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/699e6482-e421-4a0a-b00e-8378366000ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "699e6482-e421-4a0a-b00e-8378366000ba" (UID: "699e6482-e421-4a0a-b00e-8378366000ba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:51:52 crc kubenswrapper[4860]: I1014 15:51:52.469399 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/699e6482-e421-4a0a-b00e-8378366000ba-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:51:52 crc kubenswrapper[4860]: I1014 15:51:52.469430 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xx42\" (UniqueName: \"kubernetes.io/projected/699e6482-e421-4a0a-b00e-8378366000ba-kube-api-access-4xx42\") on node \"crc\" DevicePath \"\"" Oct 14 15:51:52 crc kubenswrapper[4860]: I1014 15:51:52.469441 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/699e6482-e421-4a0a-b00e-8378366000ba-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:51:53 crc kubenswrapper[4860]: I1014 15:51:53.020726 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt7nl" event={"ID":"699e6482-e421-4a0a-b00e-8378366000ba","Type":"ContainerDied","Data":"be7339d30bf28bb05de2b2276e4ee58a7bb0da3adaf68cd770f3fd919e2879d5"} Oct 14 15:51:53 crc kubenswrapper[4860]: I1014 15:51:53.020794 4860 scope.go:117] "RemoveContainer" containerID="2f373eab8746489647fb05711af93ac47c036366ced508e76678947911e58ac0" Oct 14 15:51:53 crc kubenswrapper[4860]: I1014 15:51:53.020800 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vt7nl" Oct 14 15:51:53 crc kubenswrapper[4860]: I1014 15:51:53.049241 4860 scope.go:117] "RemoveContainer" containerID="82b4b215cb77229247ea9d46fd7e74ad03d52523959246e089e69a3d855f5e27" Oct 14 15:51:53 crc kubenswrapper[4860]: I1014 15:51:53.053208 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vt7nl"] Oct 14 15:51:53 crc kubenswrapper[4860]: I1014 15:51:53.084764 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vt7nl"] Oct 14 15:51:53 crc kubenswrapper[4860]: I1014 15:51:53.088532 4860 scope.go:117] "RemoveContainer" containerID="25f52b080dff04f223099e23f8661a1aef466397fa3584c9c87df7a3b35fbd06" Oct 14 15:51:55 crc kubenswrapper[4860]: I1014 15:51:55.072739 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="699e6482-e421-4a0a-b00e-8378366000ba" path="/var/lib/kubelet/pods/699e6482-e421-4a0a-b00e-8378366000ba/volumes" Oct 14 15:51:57 crc kubenswrapper[4860]: I1014 15:51:57.061880 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:51:57 crc kubenswrapper[4860]: E1014 15:51:57.062188 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:52:10 crc kubenswrapper[4860]: I1014 15:52:10.062202 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:52:10 crc kubenswrapper[4860]: E1014 15:52:10.063211 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:52:25 crc kubenswrapper[4860]: I1014 15:52:25.062134 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:52:25 crc kubenswrapper[4860]: E1014 15:52:25.062939 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:52:39 crc kubenswrapper[4860]: I1014 15:52:39.075451 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:52:39 crc kubenswrapper[4860]: E1014 15:52:39.076397 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:52:54 crc kubenswrapper[4860]: I1014 15:52:54.062522 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:52:54 crc kubenswrapper[4860]: E1014 15:52:54.063416 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:53:05 crc kubenswrapper[4860]: I1014 15:53:05.061690 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:53:05 crc kubenswrapper[4860]: E1014 15:53:05.062646 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:53:20 crc kubenswrapper[4860]: I1014 15:53:20.062414 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:53:20 crc kubenswrapper[4860]: E1014 15:53:20.063257 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:53:34 crc kubenswrapper[4860]: I1014 15:53:34.061702 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:53:34 crc kubenswrapper[4860]: I1014 15:53:34.876563 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"2fd53b577a55975b25e3711ff8380492773306b6de8206b162e89c99b2de187a"} Oct 14 15:55:59 crc kubenswrapper[4860]: I1014 15:55:59.246257 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:55:59 crc kubenswrapper[4860]: I1014 15:55:59.246855 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:56:16 crc kubenswrapper[4860]: I1014 15:56:16.895611 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-44plt"] Oct 14 15:56:16 crc kubenswrapper[4860]: E1014 15:56:16.897542 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="699e6482-e421-4a0a-b00e-8378366000ba" containerName="extract-content" Oct 14 15:56:16 crc kubenswrapper[4860]: I1014 15:56:16.897627 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="699e6482-e421-4a0a-b00e-8378366000ba" containerName="extract-content" Oct 14 15:56:16 crc kubenswrapper[4860]: E1014 15:56:16.897700 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="699e6482-e421-4a0a-b00e-8378366000ba" containerName="extract-utilities" Oct 14 15:56:16 crc kubenswrapper[4860]: I1014 15:56:16.897753 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="699e6482-e421-4a0a-b00e-8378366000ba" containerName="extract-utilities" Oct 14 15:56:16 crc kubenswrapper[4860]: E1014 15:56:16.897817 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="699e6482-e421-4a0a-b00e-8378366000ba" containerName="registry-server" Oct 14 15:56:16 crc kubenswrapper[4860]: I1014 15:56:16.897867 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="699e6482-e421-4a0a-b00e-8378366000ba" containerName="registry-server" Oct 14 15:56:16 crc kubenswrapper[4860]: I1014 15:56:16.898128 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="699e6482-e421-4a0a-b00e-8378366000ba" containerName="registry-server" Oct 14 15:56:16 crc kubenswrapper[4860]: I1014 15:56:16.899563 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:16 crc kubenswrapper[4860]: I1014 15:56:16.904333 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-44plt"] Oct 14 15:56:17 crc kubenswrapper[4860]: I1014 15:56:17.055601 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nmqn\" (UniqueName: \"kubernetes.io/projected/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-kube-api-access-9nmqn\") pod \"community-operators-44plt\" (UID: \"2305ec9e-5df3-4c4e-816a-e0e9eec976f3\") " pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:17 crc kubenswrapper[4860]: I1014 15:56:17.055723 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-catalog-content\") pod \"community-operators-44plt\" (UID: \"2305ec9e-5df3-4c4e-816a-e0e9eec976f3\") " pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:17 crc kubenswrapper[4860]: I1014 15:56:17.055822 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-utilities\") pod \"community-operators-44plt\" (UID: \"2305ec9e-5df3-4c4e-816a-e0e9eec976f3\") " pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:17 crc kubenswrapper[4860]: I1014 15:56:17.158076 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-utilities\") pod \"community-operators-44plt\" (UID: \"2305ec9e-5df3-4c4e-816a-e0e9eec976f3\") " pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:17 crc kubenswrapper[4860]: I1014 15:56:17.158241 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nmqn\" (UniqueName: \"kubernetes.io/projected/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-kube-api-access-9nmqn\") pod \"community-operators-44plt\" (UID: \"2305ec9e-5df3-4c4e-816a-e0e9eec976f3\") " pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:17 crc kubenswrapper[4860]: I1014 15:56:17.158332 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-catalog-content\") pod \"community-operators-44plt\" (UID: \"2305ec9e-5df3-4c4e-816a-e0e9eec976f3\") " pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:17 crc kubenswrapper[4860]: I1014 15:56:17.158531 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-utilities\") pod \"community-operators-44plt\" (UID: \"2305ec9e-5df3-4c4e-816a-e0e9eec976f3\") " pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:17 crc kubenswrapper[4860]: I1014 15:56:17.158893 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-catalog-content\") pod \"community-operators-44plt\" (UID: \"2305ec9e-5df3-4c4e-816a-e0e9eec976f3\") " pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:17 crc kubenswrapper[4860]: I1014 15:56:17.181164 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9nmqn\" (UniqueName: \"kubernetes.io/projected/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-kube-api-access-9nmqn\") pod \"community-operators-44plt\" (UID: \"2305ec9e-5df3-4c4e-816a-e0e9eec976f3\") " pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:17 crc kubenswrapper[4860]: I1014 15:56:17.218510 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:17 crc kubenswrapper[4860]: W1014 15:56:17.814955 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2305ec9e_5df3_4c4e_816a_e0e9eec976f3.slice/crio-0ee3635d7ce9990a5d38e1dc8fa25d8d52bf8a8ebb625e35d38ac713adc647e0 WatchSource:0}: Error finding container 0ee3635d7ce9990a5d38e1dc8fa25d8d52bf8a8ebb625e35d38ac713adc647e0: Status 404 returned error can't find the container with id 0ee3635d7ce9990a5d38e1dc8fa25d8d52bf8a8ebb625e35d38ac713adc647e0 Oct 14 15:56:17 crc kubenswrapper[4860]: I1014 15:56:17.815003 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-44plt"] Oct 14 15:56:18 crc kubenswrapper[4860]: I1014 15:56:18.440251 4860 generic.go:334] "Generic (PLEG): container finished" podID="2305ec9e-5df3-4c4e-816a-e0e9eec976f3" containerID="04eb96ee582e69e7dacc8556dea5fe020cfc9dbe83f60699d95b31ba7c4f43b3" exitCode=0 Oct 14 15:56:18 crc kubenswrapper[4860]: I1014 15:56:18.440372 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44plt" event={"ID":"2305ec9e-5df3-4c4e-816a-e0e9eec976f3","Type":"ContainerDied","Data":"04eb96ee582e69e7dacc8556dea5fe020cfc9dbe83f60699d95b31ba7c4f43b3"} Oct 14 15:56:18 crc kubenswrapper[4860]: I1014 15:56:18.440432 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44plt" event={"ID":"2305ec9e-5df3-4c4e-816a-e0e9eec976f3","Type":"ContainerStarted","Data":"0ee3635d7ce9990a5d38e1dc8fa25d8d52bf8a8ebb625e35d38ac713adc647e0"} Oct 14 15:56:20 crc kubenswrapper[4860]: I1014 15:56:20.458106 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44plt" event={"ID":"2305ec9e-5df3-4c4e-816a-e0e9eec976f3","Type":"ContainerStarted","Data":"8f92680c9494a55eaf6d80d23573e346c0dc94354bb229fb34c67edbf1affa79"} Oct 14 15:56:21 crc kubenswrapper[4860]: I1014 15:56:21.468619 4860 generic.go:334] "Generic (PLEG): container finished" podID="2305ec9e-5df3-4c4e-816a-e0e9eec976f3" containerID="8f92680c9494a55eaf6d80d23573e346c0dc94354bb229fb34c67edbf1affa79" exitCode=0 Oct 14 15:56:21 crc kubenswrapper[4860]: I1014 15:56:21.468683 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44plt" event={"ID":"2305ec9e-5df3-4c4e-816a-e0e9eec976f3","Type":"ContainerDied","Data":"8f92680c9494a55eaf6d80d23573e346c0dc94354bb229fb34c67edbf1affa79"} Oct 14 15:56:23 crc kubenswrapper[4860]: I1014 15:56:23.492918 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44plt" event={"ID":"2305ec9e-5df3-4c4e-816a-e0e9eec976f3","Type":"ContainerStarted","Data":"3fe1771df9d4c955bdaa093199173339fd2d21340c1991fb42c39f669acb14a9"} Oct 14 15:56:23 crc kubenswrapper[4860]: I1014 15:56:23.519488 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-44plt" 
podStartSLOduration=3.877687403 podStartE2EDuration="7.519467556s" podCreationTimestamp="2025-10-14 15:56:16 +0000 UTC" firstStartedPulling="2025-10-14 15:56:18.443170043 +0000 UTC m=+4040.029953492" lastFinishedPulling="2025-10-14 15:56:22.084950196 +0000 UTC m=+4043.671733645" observedRunningTime="2025-10-14 15:56:23.510272254 +0000 UTC m=+4045.097055713" watchObservedRunningTime="2025-10-14 15:56:23.519467556 +0000 UTC m=+4045.106251005" Oct 14 15:56:27 crc kubenswrapper[4860]: I1014 15:56:27.219159 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:27 crc kubenswrapper[4860]: I1014 15:56:27.219746 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:27 crc kubenswrapper[4860]: I1014 15:56:27.271172 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:27 crc kubenswrapper[4860]: I1014 15:56:27.572597 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:27 crc kubenswrapper[4860]: I1014 15:56:27.621989 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-44plt"] Oct 14 15:56:29 crc kubenswrapper[4860]: I1014 15:56:29.246117 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:56:29 crc kubenswrapper[4860]: I1014 15:56:29.246492 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:56:29 crc kubenswrapper[4860]: I1014 15:56:29.539638 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-44plt" podUID="2305ec9e-5df3-4c4e-816a-e0e9eec976f3" containerName="registry-server" containerID="cri-o://3fe1771df9d4c955bdaa093199173339fd2d21340c1991fb42c39f669acb14a9" gracePeriod=2 Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.144635 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.220212 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nmqn\" (UniqueName: \"kubernetes.io/projected/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-kube-api-access-9nmqn\") pod \"2305ec9e-5df3-4c4e-816a-e0e9eec976f3\" (UID: \"2305ec9e-5df3-4c4e-816a-e0e9eec976f3\") " Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.220265 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-catalog-content\") pod \"2305ec9e-5df3-4c4e-816a-e0e9eec976f3\" (UID: \"2305ec9e-5df3-4c4e-816a-e0e9eec976f3\") " Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.220485 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-utilities\") pod \"2305ec9e-5df3-4c4e-816a-e0e9eec976f3\" (UID: \"2305ec9e-5df3-4c4e-816a-e0e9eec976f3\") " Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.222115 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-utilities" (OuterVolumeSpecName: "utilities") pod "2305ec9e-5df3-4c4e-816a-e0e9eec976f3" (UID: "2305ec9e-5df3-4c4e-816a-e0e9eec976f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.241681 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-kube-api-access-9nmqn" (OuterVolumeSpecName: "kube-api-access-9nmqn") pod "2305ec9e-5df3-4c4e-816a-e0e9eec976f3" (UID: "2305ec9e-5df3-4c4e-816a-e0e9eec976f3"). InnerVolumeSpecName "kube-api-access-9nmqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.269655 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2305ec9e-5df3-4c4e-816a-e0e9eec976f3" (UID: "2305ec9e-5df3-4c4e-816a-e0e9eec976f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.323303 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nmqn\" (UniqueName: \"kubernetes.io/projected/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-kube-api-access-9nmqn\") on node \"crc\" DevicePath \"\"" Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.323547 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.323655 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2305ec9e-5df3-4c4e-816a-e0e9eec976f3-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.552100 4860 generic.go:334] "Generic (PLEG): container finished" podID="2305ec9e-5df3-4c4e-816a-e0e9eec976f3" containerID="3fe1771df9d4c955bdaa093199173339fd2d21340c1991fb42c39f669acb14a9" exitCode=0 Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.552144 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44plt" event={"ID":"2305ec9e-5df3-4c4e-816a-e0e9eec976f3","Type":"ContainerDied","Data":"3fe1771df9d4c955bdaa093199173339fd2d21340c1991fb42c39f669acb14a9"} Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.552172 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44plt" event={"ID":"2305ec9e-5df3-4c4e-816a-e0e9eec976f3","Type":"ContainerDied","Data":"0ee3635d7ce9990a5d38e1dc8fa25d8d52bf8a8ebb625e35d38ac713adc647e0"} Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.552194 4860 scope.go:117] "RemoveContainer" containerID="3fe1771df9d4c955bdaa093199173339fd2d21340c1991fb42c39f669acb14a9" Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.552337 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-44plt" Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.593858 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-44plt"] Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.601996 4860 scope.go:117] "RemoveContainer" containerID="8f92680c9494a55eaf6d80d23573e346c0dc94354bb229fb34c67edbf1affa79" Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.605360 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-44plt"] Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.632650 4860 scope.go:117] "RemoveContainer" containerID="04eb96ee582e69e7dacc8556dea5fe020cfc9dbe83f60699d95b31ba7c4f43b3" Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.690631 4860 scope.go:117] "RemoveContainer" containerID="3fe1771df9d4c955bdaa093199173339fd2d21340c1991fb42c39f669acb14a9" Oct 14 15:56:30 crc kubenswrapper[4860]: E1014 15:56:30.691349 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fe1771df9d4c955bdaa093199173339fd2d21340c1991fb42c39f669acb14a9\": container with ID starting with 3fe1771df9d4c955bdaa093199173339fd2d21340c1991fb42c39f669acb14a9 not found: ID does not exist" containerID="3fe1771df9d4c955bdaa093199173339fd2d21340c1991fb42c39f669acb14a9" Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.691401 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe1771df9d4c955bdaa093199173339fd2d21340c1991fb42c39f669acb14a9"} err="failed to get container status \"3fe1771df9d4c955bdaa093199173339fd2d21340c1991fb42c39f669acb14a9\": rpc error: code = NotFound desc = could not find container \"3fe1771df9d4c955bdaa093199173339fd2d21340c1991fb42c39f669acb14a9\": container with ID starting with 3fe1771df9d4c955bdaa093199173339fd2d21340c1991fb42c39f669acb14a9 not found: ID does not exist" Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.691476 4860 scope.go:117] "RemoveContainer" containerID="8f92680c9494a55eaf6d80d23573e346c0dc94354bb229fb34c67edbf1affa79" Oct 14 15:56:30 crc kubenswrapper[4860]: E1014 15:56:30.692256 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f92680c9494a55eaf6d80d23573e346c0dc94354bb229fb34c67edbf1affa79\": container with ID starting with 8f92680c9494a55eaf6d80d23573e346c0dc94354bb229fb34c67edbf1affa79 not found: ID does not exist" containerID="8f92680c9494a55eaf6d80d23573e346c0dc94354bb229fb34c67edbf1affa79" Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.692496 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f92680c9494a55eaf6d80d23573e346c0dc94354bb229fb34c67edbf1affa79"} err="failed to get container status \"8f92680c9494a55eaf6d80d23573e346c0dc94354bb229fb34c67edbf1affa79\": rpc error: code = NotFound desc = could not find container \"8f92680c9494a55eaf6d80d23573e346c0dc94354bb229fb34c67edbf1affa79\": container with ID starting with 8f92680c9494a55eaf6d80d23573e346c0dc94354bb229fb34c67edbf1affa79 not found: ID does not exist" Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.692530 4860 scope.go:117] "RemoveContainer" containerID="04eb96ee582e69e7dacc8556dea5fe020cfc9dbe83f60699d95b31ba7c4f43b3" Oct 14 15:56:30 crc kubenswrapper[4860]: E1014 15:56:30.693862 4860 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"04eb96ee582e69e7dacc8556dea5fe020cfc9dbe83f60699d95b31ba7c4f43b3\": container with ID starting with 04eb96ee582e69e7dacc8556dea5fe020cfc9dbe83f60699d95b31ba7c4f43b3 not found: ID does not exist" containerID="04eb96ee582e69e7dacc8556dea5fe020cfc9dbe83f60699d95b31ba7c4f43b3" Oct 14 15:56:30 crc kubenswrapper[4860]: I1014 15:56:30.693899 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04eb96ee582e69e7dacc8556dea5fe020cfc9dbe83f60699d95b31ba7c4f43b3"} err="failed to get container status \"04eb96ee582e69e7dacc8556dea5fe020cfc9dbe83f60699d95b31ba7c4f43b3\": rpc error: code = NotFound desc = could not find container \"04eb96ee582e69e7dacc8556dea5fe020cfc9dbe83f60699d95b31ba7c4f43b3\": container with ID starting with 04eb96ee582e69e7dacc8556dea5fe020cfc9dbe83f60699d95b31ba7c4f43b3 not found: ID does not exist" Oct 14 15:56:31 crc kubenswrapper[4860]: I1014 15:56:31.072797 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2305ec9e-5df3-4c4e-816a-e0e9eec976f3" path="/var/lib/kubelet/pods/2305ec9e-5df3-4c4e-816a-e0e9eec976f3/volumes" Oct 14 15:56:59 crc kubenswrapper[4860]: I1014 15:56:59.248098 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:56:59 crc kubenswrapper[4860]: I1014 15:56:59.248665 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:56:59 crc kubenswrapper[4860]: I1014 15:56:59.248709 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 15:56:59 crc kubenswrapper[4860]: I1014 15:56:59.249832 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2fd53b577a55975b25e3711ff8380492773306b6de8206b162e89c99b2de187a"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 15:56:59 crc kubenswrapper[4860]: I1014 15:56:59.249904 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://2fd53b577a55975b25e3711ff8380492773306b6de8206b162e89c99b2de187a" gracePeriod=600 Oct 14 15:56:59 crc kubenswrapper[4860]: I1014 15:56:59.801574 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="2fd53b577a55975b25e3711ff8380492773306b6de8206b162e89c99b2de187a" exitCode=0 Oct 14 15:56:59 crc kubenswrapper[4860]: I1014 15:56:59.801648 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" 
event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"2fd53b577a55975b25e3711ff8380492773306b6de8206b162e89c99b2de187a"} Oct 14 15:56:59 crc kubenswrapper[4860]: I1014 15:56:59.801917 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82"} Oct 14 15:56:59 crc kubenswrapper[4860]: I1014 15:56:59.801966 4860 scope.go:117] "RemoveContainer" containerID="ef80bfbd7da33e785a9bc74acb80dd35fee64b2f27e03c9fd84079b1218fd9ad" Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.623683 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-szv4t"] Oct 14 15:57:01 crc kubenswrapper[4860]: E1014 15:57:01.625287 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2305ec9e-5df3-4c4e-816a-e0e9eec976f3" containerName="extract-content" Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.625307 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2305ec9e-5df3-4c4e-816a-e0e9eec976f3" containerName="extract-content" Oct 14 15:57:01 crc kubenswrapper[4860]: E1014 15:57:01.625336 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2305ec9e-5df3-4c4e-816a-e0e9eec976f3" containerName="extract-utilities" Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.625345 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2305ec9e-5df3-4c4e-816a-e0e9eec976f3" containerName="extract-utilities" Oct 14 15:57:01 crc kubenswrapper[4860]: E1014 15:57:01.625370 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2305ec9e-5df3-4c4e-816a-e0e9eec976f3" containerName="registry-server" Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.625378 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2305ec9e-5df3-4c4e-816a-e0e9eec976f3" containerName="registry-server" Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.625589 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2305ec9e-5df3-4c4e-816a-e0e9eec976f3" containerName="registry-server" Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.636281 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-szv4t" Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.649590 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-szv4t"] Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.733404 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cee1347-a474-4493-a969-d9e3e7f9ec27-utilities\") pod \"redhat-marketplace-szv4t\" (UID: \"1cee1347-a474-4493-a969-d9e3e7f9ec27\") " pod="openshift-marketplace/redhat-marketplace-szv4t" Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.733469 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cee1347-a474-4493-a969-d9e3e7f9ec27-catalog-content\") pod \"redhat-marketplace-szv4t\" (UID: \"1cee1347-a474-4493-a969-d9e3e7f9ec27\") " pod="openshift-marketplace/redhat-marketplace-szv4t" Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.733549 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22t64\" (UniqueName: \"kubernetes.io/projected/1cee1347-a474-4493-a969-d9e3e7f9ec27-kube-api-access-22t64\") pod \"redhat-marketplace-szv4t\" (UID: \"1cee1347-a474-4493-a969-d9e3e7f9ec27\") " pod="openshift-marketplace/redhat-marketplace-szv4t" Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.835487 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cee1347-a474-4493-a969-d9e3e7f9ec27-utilities\") pod \"redhat-marketplace-szv4t\" (UID: \"1cee1347-a474-4493-a969-d9e3e7f9ec27\") " pod="openshift-marketplace/redhat-marketplace-szv4t" Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.835543 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cee1347-a474-4493-a969-d9e3e7f9ec27-catalog-content\") pod \"redhat-marketplace-szv4t\" (UID: \"1cee1347-a474-4493-a969-d9e3e7f9ec27\") " pod="openshift-marketplace/redhat-marketplace-szv4t" Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.835625 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22t64\" (UniqueName: \"kubernetes.io/projected/1cee1347-a474-4493-a969-d9e3e7f9ec27-kube-api-access-22t64\") pod \"redhat-marketplace-szv4t\" (UID: \"1cee1347-a474-4493-a969-d9e3e7f9ec27\") " pod="openshift-marketplace/redhat-marketplace-szv4t" Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.836081 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cee1347-a474-4493-a969-d9e3e7f9ec27-utilities\") pod \"redhat-marketplace-szv4t\" (UID: \"1cee1347-a474-4493-a969-d9e3e7f9ec27\") " pod="openshift-marketplace/redhat-marketplace-szv4t" Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.836144 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cee1347-a474-4493-a969-d9e3e7f9ec27-catalog-content\") pod \"redhat-marketplace-szv4t\" (UID: \"1cee1347-a474-4493-a969-d9e3e7f9ec27\") " pod="openshift-marketplace/redhat-marketplace-szv4t" Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.861140 4860 operation_generator.go:637] "MountVolume.SetUp 
Oct 14 15:57:01 crc kubenswrapper[4860]: I1014 15:57:01.984466 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-szv4t"
Oct 14 15:57:02 crc kubenswrapper[4860]: I1014 15:57:02.532742 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-szv4t"]
Oct 14 15:57:02 crc kubenswrapper[4860]: I1014 15:57:02.839793 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szv4t" event={"ID":"1cee1347-a474-4493-a969-d9e3e7f9ec27","Type":"ContainerStarted","Data":"6e00da9c997e09b285f20c160c3f0219e17c30ff123411703041fd6f0d018e63"}
Oct 14 15:57:03 crc kubenswrapper[4860]: I1014 15:57:03.850930 4860 generic.go:334] "Generic (PLEG): container finished" podID="1cee1347-a474-4493-a969-d9e3e7f9ec27" containerID="4b4b315ed10cf3177cf06fbbb006f9c87f3cad96db866cf126f9f462f89d628e" exitCode=0
Oct 14 15:57:03 crc kubenswrapper[4860]: I1014 15:57:03.851351 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szv4t" event={"ID":"1cee1347-a474-4493-a969-d9e3e7f9ec27","Type":"ContainerDied","Data":"4b4b315ed10cf3177cf06fbbb006f9c87f3cad96db866cf126f9f462f89d628e"}
Oct 14 15:57:03 crc kubenswrapper[4860]: I1014 15:57:03.858439 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 14 15:57:05 crc kubenswrapper[4860]: I1014 15:57:05.884429 4860 generic.go:334] "Generic (PLEG): container finished" podID="1cee1347-a474-4493-a969-d9e3e7f9ec27" containerID="e4f9c6de7ab6f1c07d72157f453f6b13d558afc6c88b05f43e00355739bbb5e6" exitCode=0
Oct 14 15:57:05 crc kubenswrapper[4860]: I1014 15:57:05.884474 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szv4t" event={"ID":"1cee1347-a474-4493-a969-d9e3e7f9ec27","Type":"ContainerDied","Data":"e4f9c6de7ab6f1c07d72157f453f6b13d558afc6c88b05f43e00355739bbb5e6"}
Oct 14 15:57:06 crc kubenswrapper[4860]: I1014 15:57:06.896545 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szv4t" event={"ID":"1cee1347-a474-4493-a969-d9e3e7f9ec27","Type":"ContainerStarted","Data":"971e9d044fed51d2cb82faefc663968e5376b88c409f840a27ab70487ca19f50"}
Oct 14 15:57:06 crc kubenswrapper[4860]: I1014 15:57:06.922816 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-szv4t" podStartSLOduration=3.462676868 podStartE2EDuration="5.922798413s" podCreationTimestamp="2025-10-14 15:57:01 +0000 UTC" firstStartedPulling="2025-10-14 15:57:03.858211714 +0000 UTC m=+4085.444995163" lastFinishedPulling="2025-10-14 15:57:06.318333259 +0000 UTC m=+4087.905116708" observedRunningTime="2025-10-14 15:57:06.922215248 +0000 UTC m=+4088.508998707" watchObservedRunningTime="2025-10-14 15:57:06.922798413 +0000 UTC m=+4088.509581862"
Oct 14 15:57:11 crc kubenswrapper[4860]: I1014 15:57:11.985300 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-szv4t"
Oct 14 15:57:11 crc kubenswrapper[4860]: I1014 15:57:11.985913 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-szv4t"
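In the "Observed pod startup duration" entry above, podStartSLOduration (3.4627s) appears to be podStartE2EDuration (5.9228s) minus the image-pull window (firstStartedPulling to lastFinishedPulling). A quick check of that reading, with the nanosecond timestamps truncated to microseconds for Python's datetime:

    # Verify: SLO duration ~= E2E duration - image-pull time, using the values
    # logged above (truncated to microsecond precision).
    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    first_pull = datetime.strptime("2025-10-14 15:57:03.858211", fmt)
    last_pull = datetime.strptime("2025-10-14 15:57:06.318333", fmt)
    e2e = 5.922798413

    pull = (last_pull - first_pull).total_seconds()
    # Prints 3.462676 -- within a microsecond of podStartSLOduration=3.462676868
    print(round(e2e - pull, 6))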
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-szv4t" Oct 14 15:57:12 crc kubenswrapper[4860]: I1014 15:57:12.559868 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-szv4t" Oct 14 15:57:13 crc kubenswrapper[4860]: I1014 15:57:13.014715 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-szv4t" Oct 14 15:57:13 crc kubenswrapper[4860]: I1014 15:57:13.058628 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-szv4t"] Oct 14 15:57:14 crc kubenswrapper[4860]: I1014 15:57:14.974476 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-szv4t" podUID="1cee1347-a474-4493-a969-d9e3e7f9ec27" containerName="registry-server" containerID="cri-o://971e9d044fed51d2cb82faefc663968e5376b88c409f840a27ab70487ca19f50" gracePeriod=2 Oct 14 15:57:15 crc kubenswrapper[4860]: I1014 15:57:15.610908 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-szv4t" Oct 14 15:57:15 crc kubenswrapper[4860]: I1014 15:57:15.804041 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22t64\" (UniqueName: \"kubernetes.io/projected/1cee1347-a474-4493-a969-d9e3e7f9ec27-kube-api-access-22t64\") pod \"1cee1347-a474-4493-a969-d9e3e7f9ec27\" (UID: \"1cee1347-a474-4493-a969-d9e3e7f9ec27\") " Oct 14 15:57:15 crc kubenswrapper[4860]: I1014 15:57:15.804487 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cee1347-a474-4493-a969-d9e3e7f9ec27-catalog-content\") pod \"1cee1347-a474-4493-a969-d9e3e7f9ec27\" (UID: \"1cee1347-a474-4493-a969-d9e3e7f9ec27\") " Oct 14 15:57:15 crc kubenswrapper[4860]: I1014 15:57:15.812761 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cee1347-a474-4493-a969-d9e3e7f9ec27-kube-api-access-22t64" (OuterVolumeSpecName: "kube-api-access-22t64") pod "1cee1347-a474-4493-a969-d9e3e7f9ec27" (UID: "1cee1347-a474-4493-a969-d9e3e7f9ec27"). InnerVolumeSpecName "kube-api-access-22t64". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:57:15 crc kubenswrapper[4860]: I1014 15:57:15.837267 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cee1347-a474-4493-a969-d9e3e7f9ec27-utilities\") pod \"1cee1347-a474-4493-a969-d9e3e7f9ec27\" (UID: \"1cee1347-a474-4493-a969-d9e3e7f9ec27\") " Oct 14 15:57:15 crc kubenswrapper[4860]: I1014 15:57:15.846922 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cee1347-a474-4493-a969-d9e3e7f9ec27-utilities" (OuterVolumeSpecName: "utilities") pod "1cee1347-a474-4493-a969-d9e3e7f9ec27" (UID: "1cee1347-a474-4493-a969-d9e3e7f9ec27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:57:15 crc kubenswrapper[4860]: I1014 15:57:15.848248 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cee1347-a474-4493-a969-d9e3e7f9ec27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1cee1347-a474-4493-a969-d9e3e7f9ec27" (UID: "1cee1347-a474-4493-a969-d9e3e7f9ec27"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:57:15 crc kubenswrapper[4860]: I1014 15:57:15.940041 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22t64\" (UniqueName: \"kubernetes.io/projected/1cee1347-a474-4493-a969-d9e3e7f9ec27-kube-api-access-22t64\") on node \"crc\" DevicePath \"\"" Oct 14 15:57:15 crc kubenswrapper[4860]: I1014 15:57:15.940073 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cee1347-a474-4493-a969-d9e3e7f9ec27-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:57:15 crc kubenswrapper[4860]: I1014 15:57:15.940083 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cee1347-a474-4493-a969-d9e3e7f9ec27-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:57:15 crc kubenswrapper[4860]: I1014 15:57:15.983391 4860 generic.go:334] "Generic (PLEG): container finished" podID="1cee1347-a474-4493-a969-d9e3e7f9ec27" containerID="971e9d044fed51d2cb82faefc663968e5376b88c409f840a27ab70487ca19f50" exitCode=0 Oct 14 15:57:15 crc kubenswrapper[4860]: I1014 15:57:15.983455 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szv4t" event={"ID":"1cee1347-a474-4493-a969-d9e3e7f9ec27","Type":"ContainerDied","Data":"971e9d044fed51d2cb82faefc663968e5376b88c409f840a27ab70487ca19f50"} Oct 14 15:57:15 crc kubenswrapper[4860]: I1014 15:57:15.983492 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szv4t" event={"ID":"1cee1347-a474-4493-a969-d9e3e7f9ec27","Type":"ContainerDied","Data":"6e00da9c997e09b285f20c160c3f0219e17c30ff123411703041fd6f0d018e63"} Oct 14 15:57:15 crc kubenswrapper[4860]: I1014 15:57:15.983489 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-szv4t" Oct 14 15:57:15 crc kubenswrapper[4860]: I1014 15:57:15.983564 4860 scope.go:117] "RemoveContainer" containerID="971e9d044fed51d2cb82faefc663968e5376b88c409f840a27ab70487ca19f50" Oct 14 15:57:16 crc kubenswrapper[4860]: I1014 15:57:16.005394 4860 scope.go:117] "RemoveContainer" containerID="e4f9c6de7ab6f1c07d72157f453f6b13d558afc6c88b05f43e00355739bbb5e6" Oct 14 15:57:16 crc kubenswrapper[4860]: I1014 15:57:16.022775 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-szv4t"] Oct 14 15:57:16 crc kubenswrapper[4860]: I1014 15:57:16.033198 4860 scope.go:117] "RemoveContainer" containerID="4b4b315ed10cf3177cf06fbbb006f9c87f3cad96db866cf126f9f462f89d628e" Oct 14 15:57:16 crc kubenswrapper[4860]: I1014 15:57:16.033212 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-szv4t"] Oct 14 15:57:16 crc kubenswrapper[4860]: I1014 15:57:16.085132 4860 scope.go:117] "RemoveContainer" containerID="971e9d044fed51d2cb82faefc663968e5376b88c409f840a27ab70487ca19f50" Oct 14 15:57:16 crc kubenswrapper[4860]: E1014 15:57:16.085750 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"971e9d044fed51d2cb82faefc663968e5376b88c409f840a27ab70487ca19f50\": container with ID starting with 971e9d044fed51d2cb82faefc663968e5376b88c409f840a27ab70487ca19f50 not found: ID does not exist" containerID="971e9d044fed51d2cb82faefc663968e5376b88c409f840a27ab70487ca19f50" Oct 14 15:57:16 crc kubenswrapper[4860]: I1014 15:57:16.085798 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"971e9d044fed51d2cb82faefc663968e5376b88c409f840a27ab70487ca19f50"} err="failed to get container status \"971e9d044fed51d2cb82faefc663968e5376b88c409f840a27ab70487ca19f50\": rpc error: code = NotFound desc = could not find container \"971e9d044fed51d2cb82faefc663968e5376b88c409f840a27ab70487ca19f50\": container with ID starting with 971e9d044fed51d2cb82faefc663968e5376b88c409f840a27ab70487ca19f50 not found: ID does not exist" Oct 14 15:57:16 crc kubenswrapper[4860]: I1014 15:57:16.085822 4860 scope.go:117] "RemoveContainer" containerID="e4f9c6de7ab6f1c07d72157f453f6b13d558afc6c88b05f43e00355739bbb5e6" Oct 14 15:57:16 crc kubenswrapper[4860]: E1014 15:57:16.086180 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4f9c6de7ab6f1c07d72157f453f6b13d558afc6c88b05f43e00355739bbb5e6\": container with ID starting with e4f9c6de7ab6f1c07d72157f453f6b13d558afc6c88b05f43e00355739bbb5e6 not found: ID does not exist" containerID="e4f9c6de7ab6f1c07d72157f453f6b13d558afc6c88b05f43e00355739bbb5e6" Oct 14 15:57:16 crc kubenswrapper[4860]: I1014 15:57:16.086215 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4f9c6de7ab6f1c07d72157f453f6b13d558afc6c88b05f43e00355739bbb5e6"} err="failed to get container status \"e4f9c6de7ab6f1c07d72157f453f6b13d558afc6c88b05f43e00355739bbb5e6\": rpc error: code = NotFound desc = could not find container \"e4f9c6de7ab6f1c07d72157f453f6b13d558afc6c88b05f43e00355739bbb5e6\": container with ID starting with e4f9c6de7ab6f1c07d72157f453f6b13d558afc6c88b05f43e00355739bbb5e6 not found: ID does not exist" Oct 14 15:57:16 crc kubenswrapper[4860]: I1014 15:57:16.086240 4860 scope.go:117] "RemoveContainer" 
containerID="4b4b315ed10cf3177cf06fbbb006f9c87f3cad96db866cf126f9f462f89d628e" Oct 14 15:57:16 crc kubenswrapper[4860]: E1014 15:57:16.086638 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b4b315ed10cf3177cf06fbbb006f9c87f3cad96db866cf126f9f462f89d628e\": container with ID starting with 4b4b315ed10cf3177cf06fbbb006f9c87f3cad96db866cf126f9f462f89d628e not found: ID does not exist" containerID="4b4b315ed10cf3177cf06fbbb006f9c87f3cad96db866cf126f9f462f89d628e" Oct 14 15:57:16 crc kubenswrapper[4860]: I1014 15:57:16.086673 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b4b315ed10cf3177cf06fbbb006f9c87f3cad96db866cf126f9f462f89d628e"} err="failed to get container status \"4b4b315ed10cf3177cf06fbbb006f9c87f3cad96db866cf126f9f462f89d628e\": rpc error: code = NotFound desc = could not find container \"4b4b315ed10cf3177cf06fbbb006f9c87f3cad96db866cf126f9f462f89d628e\": container with ID starting with 4b4b315ed10cf3177cf06fbbb006f9c87f3cad96db866cf126f9f462f89d628e not found: ID does not exist" Oct 14 15:57:17 crc kubenswrapper[4860]: I1014 15:57:17.071441 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cee1347-a474-4493-a969-d9e3e7f9ec27" path="/var/lib/kubelet/pods/1cee1347-a474-4493-a969-d9e3e7f9ec27/volumes" Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.329081 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c92k9"] Oct 14 15:57:34 crc kubenswrapper[4860]: E1014 15:57:34.330242 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cee1347-a474-4493-a969-d9e3e7f9ec27" containerName="extract-utilities" Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.330255 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cee1347-a474-4493-a969-d9e3e7f9ec27" containerName="extract-utilities" Oct 14 15:57:34 crc kubenswrapper[4860]: E1014 15:57:34.330270 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cee1347-a474-4493-a969-d9e3e7f9ec27" containerName="registry-server" Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.330276 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cee1347-a474-4493-a969-d9e3e7f9ec27" containerName="registry-server" Oct 14 15:57:34 crc kubenswrapper[4860]: E1014 15:57:34.330303 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cee1347-a474-4493-a969-d9e3e7f9ec27" containerName="extract-content" Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.330310 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cee1347-a474-4493-a969-d9e3e7f9ec27" containerName="extract-content" Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.330482 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cee1347-a474-4493-a969-d9e3e7f9ec27" containerName="registry-server" Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.332051 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c92k9" Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.344357 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c92k9"] Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.396852 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec24365e-b7b6-4322-a9fa-0715fb1d9835-utilities\") pod \"redhat-operators-c92k9\" (UID: \"ec24365e-b7b6-4322-a9fa-0715fb1d9835\") " pod="openshift-marketplace/redhat-operators-c92k9" Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.397383 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec24365e-b7b6-4322-a9fa-0715fb1d9835-catalog-content\") pod \"redhat-operators-c92k9\" (UID: \"ec24365e-b7b6-4322-a9fa-0715fb1d9835\") " pod="openshift-marketplace/redhat-operators-c92k9" Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.397579 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75t4c\" (UniqueName: \"kubernetes.io/projected/ec24365e-b7b6-4322-a9fa-0715fb1d9835-kube-api-access-75t4c\") pod \"redhat-operators-c92k9\" (UID: \"ec24365e-b7b6-4322-a9fa-0715fb1d9835\") " pod="openshift-marketplace/redhat-operators-c92k9" Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.499050 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75t4c\" (UniqueName: \"kubernetes.io/projected/ec24365e-b7b6-4322-a9fa-0715fb1d9835-kube-api-access-75t4c\") pod \"redhat-operators-c92k9\" (UID: \"ec24365e-b7b6-4322-a9fa-0715fb1d9835\") " pod="openshift-marketplace/redhat-operators-c92k9" Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.499538 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec24365e-b7b6-4322-a9fa-0715fb1d9835-utilities\") pod \"redhat-operators-c92k9\" (UID: \"ec24365e-b7b6-4322-a9fa-0715fb1d9835\") " pod="openshift-marketplace/redhat-operators-c92k9" Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.499647 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec24365e-b7b6-4322-a9fa-0715fb1d9835-catalog-content\") pod \"redhat-operators-c92k9\" (UID: \"ec24365e-b7b6-4322-a9fa-0715fb1d9835\") " pod="openshift-marketplace/redhat-operators-c92k9" Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.500114 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec24365e-b7b6-4322-a9fa-0715fb1d9835-catalog-content\") pod \"redhat-operators-c92k9\" (UID: \"ec24365e-b7b6-4322-a9fa-0715fb1d9835\") " pod="openshift-marketplace/redhat-operators-c92k9" Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.500676 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec24365e-b7b6-4322-a9fa-0715fb1d9835-utilities\") pod \"redhat-operators-c92k9\" (UID: \"ec24365e-b7b6-4322-a9fa-0715fb1d9835\") " pod="openshift-marketplace/redhat-operators-c92k9" Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.519462 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-75t4c\" (UniqueName: \"kubernetes.io/projected/ec24365e-b7b6-4322-a9fa-0715fb1d9835-kube-api-access-75t4c\") pod \"redhat-operators-c92k9\" (UID: \"ec24365e-b7b6-4322-a9fa-0715fb1d9835\") " pod="openshift-marketplace/redhat-operators-c92k9" Oct 14 15:57:34 crc kubenswrapper[4860]: I1014 15:57:34.657966 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c92k9" Oct 14 15:57:35 crc kubenswrapper[4860]: I1014 15:57:35.172759 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c92k9"] Oct 14 15:57:36 crc kubenswrapper[4860]: I1014 15:57:36.156947 4860 generic.go:334] "Generic (PLEG): container finished" podID="ec24365e-b7b6-4322-a9fa-0715fb1d9835" containerID="7256cdd2844e742f6f49f7bdf99a56cba00d8ee1cb2a23b6693d358facdabdce" exitCode=0 Oct 14 15:57:36 crc kubenswrapper[4860]: I1014 15:57:36.157119 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c92k9" event={"ID":"ec24365e-b7b6-4322-a9fa-0715fb1d9835","Type":"ContainerDied","Data":"7256cdd2844e742f6f49f7bdf99a56cba00d8ee1cb2a23b6693d358facdabdce"} Oct 14 15:57:36 crc kubenswrapper[4860]: I1014 15:57:36.157273 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c92k9" event={"ID":"ec24365e-b7b6-4322-a9fa-0715fb1d9835","Type":"ContainerStarted","Data":"d5282036eff090d9322167a39d324959110bd33bf0d579f4eb83a744bc74ec97"} Oct 14 15:57:38 crc kubenswrapper[4860]: I1014 15:57:38.176091 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c92k9" event={"ID":"ec24365e-b7b6-4322-a9fa-0715fb1d9835","Type":"ContainerStarted","Data":"3e3a0fbd3c61fccc131d95d3ba95ac0ca44524149ac93d907369b72074b0d88f"} Oct 14 15:57:42 crc kubenswrapper[4860]: I1014 15:57:42.210628 4860 generic.go:334] "Generic (PLEG): container finished" podID="ec24365e-b7b6-4322-a9fa-0715fb1d9835" containerID="3e3a0fbd3c61fccc131d95d3ba95ac0ca44524149ac93d907369b72074b0d88f" exitCode=0 Oct 14 15:57:42 crc kubenswrapper[4860]: I1014 15:57:42.210674 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c92k9" event={"ID":"ec24365e-b7b6-4322-a9fa-0715fb1d9835","Type":"ContainerDied","Data":"3e3a0fbd3c61fccc131d95d3ba95ac0ca44524149ac93d907369b72074b0d88f"} Oct 14 15:57:43 crc kubenswrapper[4860]: I1014 15:57:43.222480 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c92k9" event={"ID":"ec24365e-b7b6-4322-a9fa-0715fb1d9835","Type":"ContainerStarted","Data":"840d0f376f5efc99029af55cd309c9a1a590242da3456d033dab04e678ada7bf"} Oct 14 15:57:43 crc kubenswrapper[4860]: I1014 15:57:43.244197 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c92k9" podStartSLOduration=2.736882758 podStartE2EDuration="9.244176426s" podCreationTimestamp="2025-10-14 15:57:34 +0000 UTC" firstStartedPulling="2025-10-14 15:57:36.15871743 +0000 UTC m=+4117.745500879" lastFinishedPulling="2025-10-14 15:57:42.666011098 +0000 UTC m=+4124.252794547" observedRunningTime="2025-10-14 15:57:43.241917321 +0000 UTC m=+4124.828700790" watchObservedRunningTime="2025-10-14 15:57:43.244176426 +0000 UTC m=+4124.830959875" Oct 14 15:57:44 crc kubenswrapper[4860]: I1014 15:57:44.658734 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c92k9" Oct 14 
Oct 14 15:57:44 crc kubenswrapper[4860]: I1014 15:57:44.659154 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c92k9"
Oct 14 15:57:45 crc kubenswrapper[4860]: I1014 15:57:45.725443 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c92k9" podUID="ec24365e-b7b6-4322-a9fa-0715fb1d9835" containerName="registry-server" probeResult="failure" output=<
Oct 14 15:57:45 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s
Oct 14 15:57:45 crc kubenswrapper[4860]: >
Oct 14 15:57:54 crc kubenswrapper[4860]: I1014 15:57:54.709643 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c92k9"
Oct 14 15:57:54 crc kubenswrapper[4860]: I1014 15:57:54.765403 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c92k9"
Oct 14 15:57:54 crc kubenswrapper[4860]: I1014 15:57:54.947391 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c92k9"]
Oct 14 15:57:56 crc kubenswrapper[4860]: I1014 15:57:56.329606 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c92k9" podUID="ec24365e-b7b6-4322-a9fa-0715fb1d9835" containerName="registry-server" containerID="cri-o://840d0f376f5efc99029af55cd309c9a1a590242da3456d033dab04e678ada7bf" gracePeriod=2
Oct 14 15:57:56 crc kubenswrapper[4860]: I1014 15:57:56.848668 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c92k9"
Oct 14 15:57:56 crc kubenswrapper[4860]: I1014 15:57:56.944780 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec24365e-b7b6-4322-a9fa-0715fb1d9835-utilities\") pod \"ec24365e-b7b6-4322-a9fa-0715fb1d9835\" (UID: \"ec24365e-b7b6-4322-a9fa-0715fb1d9835\") "
Oct 14 15:57:56 crc kubenswrapper[4860]: I1014 15:57:56.945159 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75t4c\" (UniqueName: \"kubernetes.io/projected/ec24365e-b7b6-4322-a9fa-0715fb1d9835-kube-api-access-75t4c\") pod \"ec24365e-b7b6-4322-a9fa-0715fb1d9835\" (UID: \"ec24365e-b7b6-4322-a9fa-0715fb1d9835\") "
Oct 14 15:57:56 crc kubenswrapper[4860]: I1014 15:57:56.945366 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec24365e-b7b6-4322-a9fa-0715fb1d9835-catalog-content\") pod \"ec24365e-b7b6-4322-a9fa-0715fb1d9835\" (UID: \"ec24365e-b7b6-4322-a9fa-0715fb1d9835\") "
Oct 14 15:57:56 crc kubenswrapper[4860]: I1014 15:57:56.945819 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec24365e-b7b6-4322-a9fa-0715fb1d9835-utilities" (OuterVolumeSpecName: "utilities") pod "ec24365e-b7b6-4322-a9fa-0715fb1d9835" (UID: "ec24365e-b7b6-4322-a9fa-0715fb1d9835"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
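The startup probe failure recorded above timed out connecting to the registry-server gRPC port: 'timeout: failed to connect service ":50051" within 1s'. The reachability side of that check can be approximated with a plain TCP connect under a one-second deadline; POD_IP below is a hypothetical placeholder, and the real probe is a gRPC health check executed against the pod, not a bare TCP dial:

    # Approximate the failing ":50051 within 1s" check with a raw TCP connect.
    import socket

    POD_IP = "10.217.0.99"  # hypothetical pod IP; substitute the real one

    try:
        with socket.create_connection((POD_IP, 50051), timeout=1.0):
            print("port 50051 accepting connections")
    except OSError as exc:
        print(f"probe would fail: {exc}")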
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:57:56 crc kubenswrapper[4860]: I1014 15:57:56.946018 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec24365e-b7b6-4322-a9fa-0715fb1d9835-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 15:57:56 crc kubenswrapper[4860]: I1014 15:57:56.950161 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec24365e-b7b6-4322-a9fa-0715fb1d9835-kube-api-access-75t4c" (OuterVolumeSpecName: "kube-api-access-75t4c") pod "ec24365e-b7b6-4322-a9fa-0715fb1d9835" (UID: "ec24365e-b7b6-4322-a9fa-0715fb1d9835"). InnerVolumeSpecName "kube-api-access-75t4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.046410 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec24365e-b7b6-4322-a9fa-0715fb1d9835-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec24365e-b7b6-4322-a9fa-0715fb1d9835" (UID: "ec24365e-b7b6-4322-a9fa-0715fb1d9835"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.047998 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec24365e-b7b6-4322-a9fa-0715fb1d9835-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.048051 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75t4c\" (UniqueName: \"kubernetes.io/projected/ec24365e-b7b6-4322-a9fa-0715fb1d9835-kube-api-access-75t4c\") on node \"crc\" DevicePath \"\"" Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.340083 4860 generic.go:334] "Generic (PLEG): container finished" podID="ec24365e-b7b6-4322-a9fa-0715fb1d9835" containerID="840d0f376f5efc99029af55cd309c9a1a590242da3456d033dab04e678ada7bf" exitCode=0 Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.340126 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c92k9" event={"ID":"ec24365e-b7b6-4322-a9fa-0715fb1d9835","Type":"ContainerDied","Data":"840d0f376f5efc99029af55cd309c9a1a590242da3456d033dab04e678ada7bf"} Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.340151 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c92k9" event={"ID":"ec24365e-b7b6-4322-a9fa-0715fb1d9835","Type":"ContainerDied","Data":"d5282036eff090d9322167a39d324959110bd33bf0d579f4eb83a744bc74ec97"} Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.340167 4860 scope.go:117] "RemoveContainer" containerID="840d0f376f5efc99029af55cd309c9a1a590242da3456d033dab04e678ada7bf" Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.340285 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c92k9" Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.372034 4860 scope.go:117] "RemoveContainer" containerID="3e3a0fbd3c61fccc131d95d3ba95ac0ca44524149ac93d907369b72074b0d88f" Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.372440 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c92k9"] Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.380782 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c92k9"] Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.392120 4860 scope.go:117] "RemoveContainer" containerID="7256cdd2844e742f6f49f7bdf99a56cba00d8ee1cb2a23b6693d358facdabdce" Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.443159 4860 scope.go:117] "RemoveContainer" containerID="840d0f376f5efc99029af55cd309c9a1a590242da3456d033dab04e678ada7bf" Oct 14 15:57:57 crc kubenswrapper[4860]: E1014 15:57:57.443651 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"840d0f376f5efc99029af55cd309c9a1a590242da3456d033dab04e678ada7bf\": container with ID starting with 840d0f376f5efc99029af55cd309c9a1a590242da3456d033dab04e678ada7bf not found: ID does not exist" containerID="840d0f376f5efc99029af55cd309c9a1a590242da3456d033dab04e678ada7bf" Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.443686 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"840d0f376f5efc99029af55cd309c9a1a590242da3456d033dab04e678ada7bf"} err="failed to get container status \"840d0f376f5efc99029af55cd309c9a1a590242da3456d033dab04e678ada7bf\": rpc error: code = NotFound desc = could not find container \"840d0f376f5efc99029af55cd309c9a1a590242da3456d033dab04e678ada7bf\": container with ID starting with 840d0f376f5efc99029af55cd309c9a1a590242da3456d033dab04e678ada7bf not found: ID does not exist" Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.443704 4860 scope.go:117] "RemoveContainer" containerID="3e3a0fbd3c61fccc131d95d3ba95ac0ca44524149ac93d907369b72074b0d88f" Oct 14 15:57:57 crc kubenswrapper[4860]: E1014 15:57:57.443983 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e3a0fbd3c61fccc131d95d3ba95ac0ca44524149ac93d907369b72074b0d88f\": container with ID starting with 3e3a0fbd3c61fccc131d95d3ba95ac0ca44524149ac93d907369b72074b0d88f not found: ID does not exist" containerID="3e3a0fbd3c61fccc131d95d3ba95ac0ca44524149ac93d907369b72074b0d88f" Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.444161 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e3a0fbd3c61fccc131d95d3ba95ac0ca44524149ac93d907369b72074b0d88f"} err="failed to get container status \"3e3a0fbd3c61fccc131d95d3ba95ac0ca44524149ac93d907369b72074b0d88f\": rpc error: code = NotFound desc = could not find container \"3e3a0fbd3c61fccc131d95d3ba95ac0ca44524149ac93d907369b72074b0d88f\": container with ID starting with 3e3a0fbd3c61fccc131d95d3ba95ac0ca44524149ac93d907369b72074b0d88f not found: ID does not exist" Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.444192 4860 scope.go:117] "RemoveContainer" containerID="7256cdd2844e742f6f49f7bdf99a56cba00d8ee1cb2a23b6693d358facdabdce" Oct 14 15:57:57 crc kubenswrapper[4860]: E1014 15:57:57.444461 4860 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"7256cdd2844e742f6f49f7bdf99a56cba00d8ee1cb2a23b6693d358facdabdce\": container with ID starting with 7256cdd2844e742f6f49f7bdf99a56cba00d8ee1cb2a23b6693d358facdabdce not found: ID does not exist" containerID="7256cdd2844e742f6f49f7bdf99a56cba00d8ee1cb2a23b6693d358facdabdce" Oct 14 15:57:57 crc kubenswrapper[4860]: I1014 15:57:57.444488 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7256cdd2844e742f6f49f7bdf99a56cba00d8ee1cb2a23b6693d358facdabdce"} err="failed to get container status \"7256cdd2844e742f6f49f7bdf99a56cba00d8ee1cb2a23b6693d358facdabdce\": rpc error: code = NotFound desc = could not find container \"7256cdd2844e742f6f49f7bdf99a56cba00d8ee1cb2a23b6693d358facdabdce\": container with ID starting with 7256cdd2844e742f6f49f7bdf99a56cba00d8ee1cb2a23b6693d358facdabdce not found: ID does not exist" Oct 14 15:57:59 crc kubenswrapper[4860]: I1014 15:57:59.073461 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec24365e-b7b6-4322-a9fa-0715fb1d9835" path="/var/lib/kubelet/pods/ec24365e-b7b6-4322-a9fa-0715fb1d9835/volumes" Oct 14 15:58:59 crc kubenswrapper[4860]: I1014 15:58:59.245481 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:58:59 crc kubenswrapper[4860]: I1014 15:58:59.246068 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:59:29 crc kubenswrapper[4860]: I1014 15:59:29.245596 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:59:29 crc kubenswrapper[4860]: I1014 15:59:29.246334 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:59:59 crc kubenswrapper[4860]: I1014 15:59:59.246164 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 15:59:59 crc kubenswrapper[4860]: I1014 15:59:59.246846 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 15:59:59 crc kubenswrapper[4860]: I1014 15:59:59.246901 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 15:59:59 crc kubenswrapper[4860]: I1014 15:59:59.247806 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 15:59:59 crc kubenswrapper[4860]: I1014 15:59:59.247869 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" gracePeriod=600 Oct 14 15:59:59 crc kubenswrapper[4860]: E1014 15:59:59.382553 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 15:59:59 crc kubenswrapper[4860]: I1014 15:59:59.400412 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" exitCode=0 Oct 14 15:59:59 crc kubenswrapper[4860]: I1014 15:59:59.400464 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82"} Oct 14 15:59:59 crc kubenswrapper[4860]: I1014 15:59:59.400502 4860 scope.go:117] "RemoveContainer" containerID="2fd53b577a55975b25e3711ff8380492773306b6de8206b162e89c99b2de187a" Oct 14 15:59:59 crc kubenswrapper[4860]: I1014 15:59:59.401305 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 15:59:59 crc kubenswrapper[4860]: E1014 15:59:59.401659 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.150229 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww"] Oct 14 16:00:00 crc kubenswrapper[4860]: E1014 16:00:00.150901 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec24365e-b7b6-4322-a9fa-0715fb1d9835" containerName="registry-server" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.150919 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec24365e-b7b6-4322-a9fa-0715fb1d9835" containerName="registry-server" Oct 14 16:00:00 crc kubenswrapper[4860]: E1014 16:00:00.150945 4860 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ec24365e-b7b6-4322-a9fa-0715fb1d9835" containerName="extract-content" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.150951 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec24365e-b7b6-4322-a9fa-0715fb1d9835" containerName="extract-content" Oct 14 16:00:00 crc kubenswrapper[4860]: E1014 16:00:00.150970 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec24365e-b7b6-4322-a9fa-0715fb1d9835" containerName="extract-utilities" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.150978 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec24365e-b7b6-4322-a9fa-0715fb1d9835" containerName="extract-utilities" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.151169 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec24365e-b7b6-4322-a9fa-0715fb1d9835" containerName="registry-server" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.151849 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.156025 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.156301 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.166976 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww"] Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.208953 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e0047b-1b11-4e42-beed-aa0d5f699c4e-config-volume\") pod \"collect-profiles-29340960-67lww\" (UID: \"06e0047b-1b11-4e42-beed-aa0d5f699c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.209160 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e0047b-1b11-4e42-beed-aa0d5f699c4e-secret-volume\") pod \"collect-profiles-29340960-67lww\" (UID: \"06e0047b-1b11-4e42-beed-aa0d5f699c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.209342 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9fhs\" (UniqueName: \"kubernetes.io/projected/06e0047b-1b11-4e42-beed-aa0d5f699c4e-kube-api-access-t9fhs\") pod \"collect-profiles-29340960-67lww\" (UID: \"06e0047b-1b11-4e42-beed-aa0d5f699c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.310851 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e0047b-1b11-4e42-beed-aa0d5f699c4e-secret-volume\") pod \"collect-profiles-29340960-67lww\" (UID: \"06e0047b-1b11-4e42-beed-aa0d5f699c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.310980 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t9fhs\" (UniqueName: \"kubernetes.io/projected/06e0047b-1b11-4e42-beed-aa0d5f699c4e-kube-api-access-t9fhs\") pod \"collect-profiles-29340960-67lww\" (UID: \"06e0047b-1b11-4e42-beed-aa0d5f699c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.311113 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e0047b-1b11-4e42-beed-aa0d5f699c4e-config-volume\") pod \"collect-profiles-29340960-67lww\" (UID: \"06e0047b-1b11-4e42-beed-aa0d5f699c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.312630 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e0047b-1b11-4e42-beed-aa0d5f699c4e-config-volume\") pod \"collect-profiles-29340960-67lww\" (UID: \"06e0047b-1b11-4e42-beed-aa0d5f699c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.317401 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e0047b-1b11-4e42-beed-aa0d5f699c4e-secret-volume\") pod \"collect-profiles-29340960-67lww\" (UID: \"06e0047b-1b11-4e42-beed-aa0d5f699c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.331214 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9fhs\" (UniqueName: \"kubernetes.io/projected/06e0047b-1b11-4e42-beed-aa0d5f699c4e-kube-api-access-t9fhs\") pod \"collect-profiles-29340960-67lww\" (UID: \"06e0047b-1b11-4e42-beed-aa0d5f699c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.471203 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" Oct 14 16:00:00 crc kubenswrapper[4860]: I1014 16:00:00.949943 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww"] Oct 14 16:00:01 crc kubenswrapper[4860]: I1014 16:00:01.429556 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" event={"ID":"06e0047b-1b11-4e42-beed-aa0d5f699c4e","Type":"ContainerStarted","Data":"3688695ca2fb7acfcc5ed930fe17ac085e81078bceb6d71dde43c6d171ba2546"} Oct 14 16:00:01 crc kubenswrapper[4860]: I1014 16:00:01.431485 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" event={"ID":"06e0047b-1b11-4e42-beed-aa0d5f699c4e","Type":"ContainerStarted","Data":"bed3b904ecd9f9b37f4ecc38e722c3a1fd82d61c983080eba9de47897841f738"} Oct 14 16:00:01 crc kubenswrapper[4860]: I1014 16:00:01.449070 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" podStartSLOduration=1.449049313 podStartE2EDuration="1.449049313s" podCreationTimestamp="2025-10-14 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 16:00:01.444726909 +0000 UTC m=+4263.031510358" watchObservedRunningTime="2025-10-14 16:00:01.449049313 +0000 UTC m=+4263.035832782" Oct 14 16:00:02 crc kubenswrapper[4860]: I1014 16:00:02.439649 4860 generic.go:334] "Generic (PLEG): container finished" podID="06e0047b-1b11-4e42-beed-aa0d5f699c4e" containerID="3688695ca2fb7acfcc5ed930fe17ac085e81078bceb6d71dde43c6d171ba2546" exitCode=0 Oct 14 16:00:02 crc kubenswrapper[4860]: I1014 16:00:02.439695 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" event={"ID":"06e0047b-1b11-4e42-beed-aa0d5f699c4e","Type":"ContainerDied","Data":"3688695ca2fb7acfcc5ed930fe17ac085e81078bceb6d71dde43c6d171ba2546"} Oct 14 16:00:03 crc kubenswrapper[4860]: I1014 16:00:03.877067 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" Oct 14 16:00:03 crc kubenswrapper[4860]: I1014 16:00:03.999345 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e0047b-1b11-4e42-beed-aa0d5f699c4e-secret-volume\") pod \"06e0047b-1b11-4e42-beed-aa0d5f699c4e\" (UID: \"06e0047b-1b11-4e42-beed-aa0d5f699c4e\") " Oct 14 16:00:03 crc kubenswrapper[4860]: I1014 16:00:03.999431 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9fhs\" (UniqueName: \"kubernetes.io/projected/06e0047b-1b11-4e42-beed-aa0d5f699c4e-kube-api-access-t9fhs\") pod \"06e0047b-1b11-4e42-beed-aa0d5f699c4e\" (UID: \"06e0047b-1b11-4e42-beed-aa0d5f699c4e\") " Oct 14 16:00:03 crc kubenswrapper[4860]: I1014 16:00:03.999726 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e0047b-1b11-4e42-beed-aa0d5f699c4e-config-volume\") pod \"06e0047b-1b11-4e42-beed-aa0d5f699c4e\" (UID: \"06e0047b-1b11-4e42-beed-aa0d5f699c4e\") " Oct 14 16:00:04 crc kubenswrapper[4860]: I1014 16:00:04.000762 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e0047b-1b11-4e42-beed-aa0d5f699c4e-config-volume" (OuterVolumeSpecName: "config-volume") pod "06e0047b-1b11-4e42-beed-aa0d5f699c4e" (UID: "06e0047b-1b11-4e42-beed-aa0d5f699c4e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 16:00:04 crc kubenswrapper[4860]: I1014 16:00:04.005119 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06e0047b-1b11-4e42-beed-aa0d5f699c4e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "06e0047b-1b11-4e42-beed-aa0d5f699c4e" (UID: "06e0047b-1b11-4e42-beed-aa0d5f699c4e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 16:00:04 crc kubenswrapper[4860]: I1014 16:00:04.005377 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e0047b-1b11-4e42-beed-aa0d5f699c4e-kube-api-access-t9fhs" (OuterVolumeSpecName: "kube-api-access-t9fhs") pod "06e0047b-1b11-4e42-beed-aa0d5f699c4e" (UID: "06e0047b-1b11-4e42-beed-aa0d5f699c4e"). InnerVolumeSpecName "kube-api-access-t9fhs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:00:04 crc kubenswrapper[4860]: I1014 16:00:04.102150 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e0047b-1b11-4e42-beed-aa0d5f699c4e-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 16:00:04 crc kubenswrapper[4860]: I1014 16:00:04.102204 4860 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e0047b-1b11-4e42-beed-aa0d5f699c4e-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 16:00:04 crc kubenswrapper[4860]: I1014 16:00:04.102214 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9fhs\" (UniqueName: \"kubernetes.io/projected/06e0047b-1b11-4e42-beed-aa0d5f699c4e-kube-api-access-t9fhs\") on node \"crc\" DevicePath \"\"" Oct 14 16:00:04 crc kubenswrapper[4860]: I1014 16:00:04.460063 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" event={"ID":"06e0047b-1b11-4e42-beed-aa0d5f699c4e","Type":"ContainerDied","Data":"bed3b904ecd9f9b37f4ecc38e722c3a1fd82d61c983080eba9de47897841f738"} Oct 14 16:00:04 crc kubenswrapper[4860]: I1014 16:00:04.460107 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bed3b904ecd9f9b37f4ecc38e722c3a1fd82d61c983080eba9de47897841f738" Oct 14 16:00:04 crc kubenswrapper[4860]: I1014 16:00:04.460145 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340960-67lww" Oct 14 16:00:04 crc kubenswrapper[4860]: I1014 16:00:04.524697 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s"] Oct 14 16:00:04 crc kubenswrapper[4860]: I1014 16:00:04.532902 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340915-cq49s"] Oct 14 16:00:05 crc kubenswrapper[4860]: I1014 16:00:05.072819 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eba36c59-e61d-461d-b62a-90faf793dff6" path="/var/lib/kubelet/pods/eba36c59-e61d-461d-b62a-90faf793dff6/volumes" Oct 14 16:00:14 crc kubenswrapper[4860]: I1014 16:00:14.062934 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:00:14 crc kubenswrapper[4860]: E1014 16:00:14.063958 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:00:18 crc kubenswrapper[4860]: I1014 16:00:18.472471 4860 scope.go:117] "RemoveContainer" containerID="4027bc952008939a6a6ddb3b33f74539e9b75b742b9d75639ed7b72170dc7781" Oct 14 16:00:27 crc kubenswrapper[4860]: I1014 16:00:27.062167 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:00:27 crc kubenswrapper[4860]: E1014 16:00:27.062958 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Oct 14 16:00:40 crc kubenswrapper[4860]: I1014 16:00:40.061522 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82"
Oct 14 16:00:40 crc kubenswrapper[4860]: E1014 16:00:40.062428 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051"
Oct 14 16:00:51 crc kubenswrapper[4860]: I1014 16:00:51.061841 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82"
Oct 14 16:00:51 crc kubenswrapper[4860]: E1014 16:00:51.062718 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051"
Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.164136 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29340961-lsqnc"]
Oct 14 16:01:00 crc kubenswrapper[4860]: E1014 16:01:00.165297 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e0047b-1b11-4e42-beed-aa0d5f699c4e" containerName="collect-profiles"
Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.165317 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e0047b-1b11-4e42-beed-aa0d5f699c4e" containerName="collect-profiles"
Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.165581 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e0047b-1b11-4e42-beed-aa0d5f699c4e" containerName="collect-profiles"
Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.166631 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29340961-lsqnc"
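The repeated "back-off 5m0s restarting failed container" errors above are CrashLoopBackOff: the kubelet doubles the restart delay per crash, capped at five minutes, while the 10-13 second spacing of the "Error syncing pod" lines reflects the sync loop re-checking the pod, not the back-off schedule itself. A sketch of the doubling schedule, assuming the usual 10s initial delay:

    # CrashLoopBackOff delay progression: doubles per restart, capped at 5m.
    def backoff_schedule(initial: float = 10.0, cap: float = 300.0, n: int = 8):
        delay = initial
        for _ in range(n):
            yield min(delay, cap)
            delay *= 2

    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
    print(list(backoff_schedule()))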
Need to start a new one" pod="openstack/keystone-cron-29340961-lsqnc" Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.186856 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29340961-lsqnc"] Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.296998 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39034f1-fd48-4b12-a14c-55abc2828764-config-data\") pod \"keystone-cron-29340961-lsqnc\" (UID: \"e39034f1-fd48-4b12-a14c-55abc2828764\") " pod="openstack/keystone-cron-29340961-lsqnc" Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.297739 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39034f1-fd48-4b12-a14c-55abc2828764-combined-ca-bundle\") pod \"keystone-cron-29340961-lsqnc\" (UID: \"e39034f1-fd48-4b12-a14c-55abc2828764\") " pod="openstack/keystone-cron-29340961-lsqnc" Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.297842 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e39034f1-fd48-4b12-a14c-55abc2828764-fernet-keys\") pod \"keystone-cron-29340961-lsqnc\" (UID: \"e39034f1-fd48-4b12-a14c-55abc2828764\") " pod="openstack/keystone-cron-29340961-lsqnc" Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.298019 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpnnw\" (UniqueName: \"kubernetes.io/projected/e39034f1-fd48-4b12-a14c-55abc2828764-kube-api-access-kpnnw\") pod \"keystone-cron-29340961-lsqnc\" (UID: \"e39034f1-fd48-4b12-a14c-55abc2828764\") " pod="openstack/keystone-cron-29340961-lsqnc" Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.399621 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpnnw\" (UniqueName: \"kubernetes.io/projected/e39034f1-fd48-4b12-a14c-55abc2828764-kube-api-access-kpnnw\") pod \"keystone-cron-29340961-lsqnc\" (UID: \"e39034f1-fd48-4b12-a14c-55abc2828764\") " pod="openstack/keystone-cron-29340961-lsqnc" Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.399734 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39034f1-fd48-4b12-a14c-55abc2828764-config-data\") pod \"keystone-cron-29340961-lsqnc\" (UID: \"e39034f1-fd48-4b12-a14c-55abc2828764\") " pod="openstack/keystone-cron-29340961-lsqnc" Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.399772 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39034f1-fd48-4b12-a14c-55abc2828764-combined-ca-bundle\") pod \"keystone-cron-29340961-lsqnc\" (UID: \"e39034f1-fd48-4b12-a14c-55abc2828764\") " pod="openstack/keystone-cron-29340961-lsqnc" Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.399815 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e39034f1-fd48-4b12-a14c-55abc2828764-fernet-keys\") pod \"keystone-cron-29340961-lsqnc\" (UID: \"e39034f1-fd48-4b12-a14c-55abc2828764\") " pod="openstack/keystone-cron-29340961-lsqnc" Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.406333 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39034f1-fd48-4b12-a14c-55abc2828764-combined-ca-bundle\") pod \"keystone-cron-29340961-lsqnc\" (UID: \"e39034f1-fd48-4b12-a14c-55abc2828764\") " pod="openstack/keystone-cron-29340961-lsqnc" Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.407643 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e39034f1-fd48-4b12-a14c-55abc2828764-fernet-keys\") pod \"keystone-cron-29340961-lsqnc\" (UID: \"e39034f1-fd48-4b12-a14c-55abc2828764\") " pod="openstack/keystone-cron-29340961-lsqnc" Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.409042 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39034f1-fd48-4b12-a14c-55abc2828764-config-data\") pod \"keystone-cron-29340961-lsqnc\" (UID: \"e39034f1-fd48-4b12-a14c-55abc2828764\") " pod="openstack/keystone-cron-29340961-lsqnc" Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.418718 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpnnw\" (UniqueName: \"kubernetes.io/projected/e39034f1-fd48-4b12-a14c-55abc2828764-kube-api-access-kpnnw\") pod \"keystone-cron-29340961-lsqnc\" (UID: \"e39034f1-fd48-4b12-a14c-55abc2828764\") " pod="openstack/keystone-cron-29340961-lsqnc" Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.510355 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29340961-lsqnc" Oct 14 16:01:00 crc kubenswrapper[4860]: I1014 16:01:00.995816 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29340961-lsqnc"] Oct 14 16:01:01 crc kubenswrapper[4860]: I1014 16:01:01.144559 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340961-lsqnc" event={"ID":"e39034f1-fd48-4b12-a14c-55abc2828764","Type":"ContainerStarted","Data":"ea2a722c0eb2919d26dd39559a09979f9cc9f4e899707301bed0fef0ecbfcbf9"} Oct 14 16:01:02 crc kubenswrapper[4860]: I1014 16:01:02.064781 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:01:02 crc kubenswrapper[4860]: E1014 16:01:02.065667 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:01:02 crc kubenswrapper[4860]: I1014 16:01:02.154341 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340961-lsqnc" event={"ID":"e39034f1-fd48-4b12-a14c-55abc2828764","Type":"ContainerStarted","Data":"68ca0cfd08e445bef065281dc25c38588afd3f51c0e0a93c7a4cd4d22322c028"} Oct 14 16:01:02 crc kubenswrapper[4860]: I1014 16:01:02.188487 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29340961-lsqnc" podStartSLOduration=2.188293337 podStartE2EDuration="2.188293337s" podCreationTimestamp="2025-10-14 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 16:01:02.181059271 +0000 UTC m=+4323.767842730" 
watchObservedRunningTime="2025-10-14 16:01:02.188293337 +0000 UTC m=+4323.775076786" Oct 14 16:01:06 crc kubenswrapper[4860]: I1014 16:01:06.195144 4860 generic.go:334] "Generic (PLEG): container finished" podID="e39034f1-fd48-4b12-a14c-55abc2828764" containerID="68ca0cfd08e445bef065281dc25c38588afd3f51c0e0a93c7a4cd4d22322c028" exitCode=0 Oct 14 16:01:06 crc kubenswrapper[4860]: I1014 16:01:06.195192 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340961-lsqnc" event={"ID":"e39034f1-fd48-4b12-a14c-55abc2828764","Type":"ContainerDied","Data":"68ca0cfd08e445bef065281dc25c38588afd3f51c0e0a93c7a4cd4d22322c028"}
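
keystone-cron-29340961-lsqnc is a routine Keystone CronJob run: its three secrets (config-data, combined-ca-bundle, fernet-keys) plus the projected service-account token were verified and mounted within about 120ms, the pod was fully running just over two seconds after creation, and here its container exits with code 0, i.e. the job simply finished. The teardown below is the normal epilogue of a completed Job pod, not an error path; despite its wording, the "No ready sandbox for pod can be found. Need to start a new one" record that follows does not appear to restart anything for a pod that has already succeeded and is being cleaned up. A throwaway sketch for pulling durations like these out of the syslog-style prefixes (sub-second offsets are dropped, so results are only good to about a second):

    package main

    import (
        "fmt"
        "time"
    )

    // mustParse reads a "2025 Oct 14 16:01:00"-style stamp; the year is
    // supplied by hand because the journal prefix omits it.
    func mustParse(s string) time.Time {
        t, err := time.Parse("2006 Jan 2 15:04:05", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        added := mustParse("2025 Oct 14 16:01:00") // SyncLoop ADD
        died := mustParse("2025 Oct 14 16:01:06")  // ContainerDied, exitCode=0
        gone := mustParse("2025 Oct 14 16:01:08")  // sandbox died, volumes detached
        fmt.Println("container ran:", died.Sub(added))
        fmt.Println("whole pod:    ", gone.Sub(added))
    }
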
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 16:01:07 crc kubenswrapper[4860]: I1014 16:01:07.896505 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39034f1-fd48-4b12-a14c-55abc2828764-config-data" (OuterVolumeSpecName: "config-data") pod "e39034f1-fd48-4b12-a14c-55abc2828764" (UID: "e39034f1-fd48-4b12-a14c-55abc2828764"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 16:01:07 crc kubenswrapper[4860]: I1014 16:01:07.946808 4860 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e39034f1-fd48-4b12-a14c-55abc2828764-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 14 16:01:07 crc kubenswrapper[4860]: I1014 16:01:07.946843 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39034f1-fd48-4b12-a14c-55abc2828764-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 16:01:07 crc kubenswrapper[4860]: I1014 16:01:07.946857 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpnnw\" (UniqueName: \"kubernetes.io/projected/e39034f1-fd48-4b12-a14c-55abc2828764-kube-api-access-kpnnw\") on node \"crc\" DevicePath \"\"" Oct 14 16:01:07 crc kubenswrapper[4860]: I1014 16:01:07.946866 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39034f1-fd48-4b12-a14c-55abc2828764-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 16:01:08 crc kubenswrapper[4860]: I1014 16:01:08.226438 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340961-lsqnc" event={"ID":"e39034f1-fd48-4b12-a14c-55abc2828764","Type":"ContainerDied","Data":"ea2a722c0eb2919d26dd39559a09979f9cc9f4e899707301bed0fef0ecbfcbf9"} Oct 14 16:01:08 crc kubenswrapper[4860]: I1014 16:01:08.226762 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea2a722c0eb2919d26dd39559a09979f9cc9f4e899707301bed0fef0ecbfcbf9" Oct 14 16:01:08 crc kubenswrapper[4860]: I1014 16:01:08.226489 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29340961-lsqnc" Oct 14 16:01:16 crc kubenswrapper[4860]: I1014 16:01:16.062919 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:01:16 crc kubenswrapper[4860]: E1014 16:01:16.064248 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:01:25 crc kubenswrapper[4860]: I1014 16:01:25.537243 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="33f4677b-3c11-4662-9129-35805ee9cab0" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.171:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.007719 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c4xbx"] Oct 14 16:01:31 crc kubenswrapper[4860]: E1014 16:01:31.008628 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39034f1-fd48-4b12-a14c-55abc2828764" containerName="keystone-cron" Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.008640 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39034f1-fd48-4b12-a14c-55abc2828764" containerName="keystone-cron" Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.008887 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39034f1-fd48-4b12-a14c-55abc2828764" containerName="keystone-cron" Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.010228 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.017563 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c4xbx"] Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.065113 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:01:31 crc kubenswrapper[4860]: E1014 16:01:31.065381 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.078378 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4738335-5957-406d-9e61-1eaf329d93cc-utilities\") pod \"certified-operators-c4xbx\" (UID: \"f4738335-5957-406d-9e61-1eaf329d93cc\") " pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.078682 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2gnw\" (UniqueName: \"kubernetes.io/projected/f4738335-5957-406d-9e61-1eaf329d93cc-kube-api-access-s2gnw\") pod \"certified-operators-c4xbx\" (UID: \"f4738335-5957-406d-9e61-1eaf329d93cc\") " pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.078740 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4738335-5957-406d-9e61-1eaf329d93cc-catalog-content\") pod \"certified-operators-c4xbx\" (UID: \"f4738335-5957-406d-9e61-1eaf329d93cc\") " pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.180438 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2gnw\" (UniqueName: \"kubernetes.io/projected/f4738335-5957-406d-9e61-1eaf329d93cc-kube-api-access-s2gnw\") pod \"certified-operators-c4xbx\" (UID: \"f4738335-5957-406d-9e61-1eaf329d93cc\") " pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.180766 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4738335-5957-406d-9e61-1eaf329d93cc-catalog-content\") pod \"certified-operators-c4xbx\" (UID: \"f4738335-5957-406d-9e61-1eaf329d93cc\") " pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.180813 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4738335-5957-406d-9e61-1eaf329d93cc-utilities\") pod \"certified-operators-c4xbx\" (UID: \"f4738335-5957-406d-9e61-1eaf329d93cc\") " pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.184201 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f4738335-5957-406d-9e61-1eaf329d93cc-catalog-content\") pod \"certified-operators-c4xbx\" (UID: \"f4738335-5957-406d-9e61-1eaf329d93cc\") " pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.184387 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4738335-5957-406d-9e61-1eaf329d93cc-utilities\") pod \"certified-operators-c4xbx\" (UID: \"f4738335-5957-406d-9e61-1eaf329d93cc\") " pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.211705 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2gnw\" (UniqueName: \"kubernetes.io/projected/f4738335-5957-406d-9e61-1eaf329d93cc-kube-api-access-s2gnw\") pod \"certified-operators-c4xbx\" (UID: \"f4738335-5957-406d-9e61-1eaf329d93cc\") " pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.359143 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:31 crc kubenswrapper[4860]: I1014 16:01:31.918623 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c4xbx"] Oct 14 16:01:32 crc kubenswrapper[4860]: I1014 16:01:32.454529 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4738335-5957-406d-9e61-1eaf329d93cc" containerID="f1cdff7b7aae54c2350f12757c9ed6863e15e1e4130db3f0f5b129d75d3fa534" exitCode=0 Oct 14 16:01:32 crc kubenswrapper[4860]: I1014 16:01:32.454643 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4xbx" event={"ID":"f4738335-5957-406d-9e61-1eaf329d93cc","Type":"ContainerDied","Data":"f1cdff7b7aae54c2350f12757c9ed6863e15e1e4130db3f0f5b129d75d3fa534"} Oct 14 16:01:32 crc kubenswrapper[4860]: I1014 16:01:32.455680 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4xbx" event={"ID":"f4738335-5957-406d-9e61-1eaf329d93cc","Type":"ContainerStarted","Data":"3f5613deae38eff153428c78a2fde0e4d346f13e6e0a8e34092b4c680c657635"} Oct 14 16:01:34 crc kubenswrapper[4860]: I1014 16:01:34.477056 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4738335-5957-406d-9e61-1eaf329d93cc" containerID="47808b78e448297a096874810033b53712ac734bcef21dd9285fd3d8573538da" exitCode=0 Oct 14 16:01:34 crc kubenswrapper[4860]: I1014 16:01:34.477137 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4xbx" event={"ID":"f4738335-5957-406d-9e61-1eaf329d93cc","Type":"ContainerDied","Data":"47808b78e448297a096874810033b53712ac734bcef21dd9285fd3d8573538da"} Oct 14 16:01:36 crc kubenswrapper[4860]: I1014 16:01:36.508074 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4xbx" event={"ID":"f4738335-5957-406d-9e61-1eaf329d93cc","Type":"ContainerStarted","Data":"20b99917217fefff5408c33bc51ee811ae760c1dc2f5c124ea75c6f7655e1d9d"} Oct 14 16:01:36 crc kubenswrapper[4860]: I1014 16:01:36.531190 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c4xbx" podStartSLOduration=3.474982903 podStartE2EDuration="6.531166219s" podCreationTimestamp="2025-10-14 16:01:30 +0000 UTC" firstStartedPulling="2025-10-14 16:01:32.456970342 +0000 UTC 
m=+4354.043753791" lastFinishedPulling="2025-10-14 16:01:35.513153658 +0000 UTC m=+4357.099937107" observedRunningTime="2025-10-14 16:01:36.526049915 +0000 UTC m=+4358.112833374" watchObservedRunningTime="2025-10-14 16:01:36.531166219 +0000 UTC m=+4358.117949668" Oct 14 16:01:41 crc kubenswrapper[4860]: I1014 16:01:41.359601 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:41 crc kubenswrapper[4860]: I1014 16:01:41.360115 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:41 crc kubenswrapper[4860]: I1014 16:01:41.405666 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:41 crc kubenswrapper[4860]: I1014 16:01:41.608713 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:41 crc kubenswrapper[4860]: I1014 16:01:41.668518 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c4xbx"] Oct 14 16:01:43 crc kubenswrapper[4860]: I1014 16:01:43.062174 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:01:43 crc kubenswrapper[4860]: E1014 16:01:43.062797 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:01:43 crc kubenswrapper[4860]: I1014 16:01:43.571615 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c4xbx" podUID="f4738335-5957-406d-9e61-1eaf329d93cc" containerName="registry-server" containerID="cri-o://20b99917217fefff5408c33bc51ee811ae760c1dc2f5c124ea75c6f7655e1d9d" gracePeriod=2 Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.328887 4860 util.go:48] "No ready sandbox for pod can be found. 
Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.328887 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.449089 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4738335-5957-406d-9e61-1eaf329d93cc-utilities\") pod \"f4738335-5957-406d-9e61-1eaf329d93cc\" (UID: \"f4738335-5957-406d-9e61-1eaf329d93cc\") " Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.449214 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4738335-5957-406d-9e61-1eaf329d93cc-catalog-content\") pod \"f4738335-5957-406d-9e61-1eaf329d93cc\" (UID: \"f4738335-5957-406d-9e61-1eaf329d93cc\") " Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.449245 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2gnw\" (UniqueName: \"kubernetes.io/projected/f4738335-5957-406d-9e61-1eaf329d93cc-kube-api-access-s2gnw\") pod \"f4738335-5957-406d-9e61-1eaf329d93cc\" (UID: \"f4738335-5957-406d-9e61-1eaf329d93cc\") " Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.449888 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4738335-5957-406d-9e61-1eaf329d93cc-utilities" (OuterVolumeSpecName: "utilities") pod "f4738335-5957-406d-9e61-1eaf329d93cc" (UID: "f4738335-5957-406d-9e61-1eaf329d93cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.464337 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4738335-5957-406d-9e61-1eaf329d93cc-kube-api-access-s2gnw" (OuterVolumeSpecName: "kube-api-access-s2gnw") pod "f4738335-5957-406d-9e61-1eaf329d93cc" (UID: "f4738335-5957-406d-9e61-1eaf329d93cc"). InnerVolumeSpecName "kube-api-access-s2gnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.496880 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4738335-5957-406d-9e61-1eaf329d93cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4738335-5957-406d-9e61-1eaf329d93cc" (UID: "f4738335-5957-406d-9e61-1eaf329d93cc"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.552320 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4738335-5957-406d-9e61-1eaf329d93cc-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.552354 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2gnw\" (UniqueName: \"kubernetes.io/projected/f4738335-5957-406d-9e61-1eaf329d93cc-kube-api-access-s2gnw\") on node \"crc\" DevicePath \"\"" Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.552367 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4738335-5957-406d-9e61-1eaf329d93cc-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.580581 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4738335-5957-406d-9e61-1eaf329d93cc" containerID="20b99917217fefff5408c33bc51ee811ae760c1dc2f5c124ea75c6f7655e1d9d" exitCode=0 Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.580623 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4xbx" event={"ID":"f4738335-5957-406d-9e61-1eaf329d93cc","Type":"ContainerDied","Data":"20b99917217fefff5408c33bc51ee811ae760c1dc2f5c124ea75c6f7655e1d9d"} Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.580646 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4xbx" event={"ID":"f4738335-5957-406d-9e61-1eaf329d93cc","Type":"ContainerDied","Data":"3f5613deae38eff153428c78a2fde0e4d346f13e6e0a8e34092b4c680c657635"} Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.580666 4860 scope.go:117] "RemoveContainer" containerID="20b99917217fefff5408c33bc51ee811ae760c1dc2f5c124ea75c6f7655e1d9d" Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.580776 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c4xbx" Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.615600 4860 scope.go:117] "RemoveContainer" containerID="47808b78e448297a096874810033b53712ac734bcef21dd9285fd3d8573538da" Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.622985 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c4xbx"] Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.644951 4860 scope.go:117] "RemoveContainer" containerID="f1cdff7b7aae54c2350f12757c9ed6863e15e1e4130db3f0f5b129d75d3fa534" Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.647367 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c4xbx"] Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.691369 4860 scope.go:117] "RemoveContainer" containerID="20b99917217fefff5408c33bc51ee811ae760c1dc2f5c124ea75c6f7655e1d9d" Oct 14 16:01:44 crc kubenswrapper[4860]: E1014 16:01:44.692840 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20b99917217fefff5408c33bc51ee811ae760c1dc2f5c124ea75c6f7655e1d9d\": container with ID starting with 20b99917217fefff5408c33bc51ee811ae760c1dc2f5c124ea75c6f7655e1d9d not found: ID does not exist" containerID="20b99917217fefff5408c33bc51ee811ae760c1dc2f5c124ea75c6f7655e1d9d" Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.692879 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20b99917217fefff5408c33bc51ee811ae760c1dc2f5c124ea75c6f7655e1d9d"} err="failed to get container status \"20b99917217fefff5408c33bc51ee811ae760c1dc2f5c124ea75c6f7655e1d9d\": rpc error: code = NotFound desc = could not find container \"20b99917217fefff5408c33bc51ee811ae760c1dc2f5c124ea75c6f7655e1d9d\": container with ID starting with 20b99917217fefff5408c33bc51ee811ae760c1dc2f5c124ea75c6f7655e1d9d not found: ID does not exist" Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.692907 4860 scope.go:117] "RemoveContainer" containerID="47808b78e448297a096874810033b53712ac734bcef21dd9285fd3d8573538da" Oct 14 16:01:44 crc kubenswrapper[4860]: E1014 16:01:44.693861 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47808b78e448297a096874810033b53712ac734bcef21dd9285fd3d8573538da\": container with ID starting with 47808b78e448297a096874810033b53712ac734bcef21dd9285fd3d8573538da not found: ID does not exist" containerID="47808b78e448297a096874810033b53712ac734bcef21dd9285fd3d8573538da" Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.693925 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47808b78e448297a096874810033b53712ac734bcef21dd9285fd3d8573538da"} err="failed to get container status \"47808b78e448297a096874810033b53712ac734bcef21dd9285fd3d8573538da\": rpc error: code = NotFound desc = could not find container \"47808b78e448297a096874810033b53712ac734bcef21dd9285fd3d8573538da\": container with ID starting with 47808b78e448297a096874810033b53712ac734bcef21dd9285fd3d8573538da not found: ID does not exist" Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.693952 4860 scope.go:117] "RemoveContainer" containerID="f1cdff7b7aae54c2350f12757c9ed6863e15e1e4130db3f0f5b129d75d3fa534"
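
The error records in this stretch look alarming but are a benign cleanup race: the pod's containers were already removed along with the pod an instant earlier, so when deletion is retried the runtime answers NotFound for IDs 20b999..., 47808... and, just below, f1cdff... The kubelet logs the failure at error level and simply moves on, which matches the usual idiom for gRPC NotFound on a delete: "already gone" is as good as deleted. Sketched with the same status helpers these rpc errors are built from (needs the google.golang.org/grpc module):

    package main

    import (
        "errors"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // cleanup treats NotFound from the runtime as success: the container
    // is already gone, so there is nothing left to remove.
    func cleanup(removeErr error) error {
        if removeErr == nil || status.Code(removeErr) == codes.NotFound {
            return nil
        }
        return removeErr // anything else is a real failure
    }

    func main() {
        gone := status.Error(codes.NotFound, "could not find container")
        fmt.Println(cleanup(gone))                   // <nil>
        fmt.Println(cleanup(errors.New("rpc boom"))) // survives as an error
    }
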
failed" err="rpc error: code = NotFound desc = could not find container \"f1cdff7b7aae54c2350f12757c9ed6863e15e1e4130db3f0f5b129d75d3fa534\": container with ID starting with f1cdff7b7aae54c2350f12757c9ed6863e15e1e4130db3f0f5b129d75d3fa534 not found: ID does not exist" containerID="f1cdff7b7aae54c2350f12757c9ed6863e15e1e4130db3f0f5b129d75d3fa534" Oct 14 16:01:44 crc kubenswrapper[4860]: I1014 16:01:44.694375 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1cdff7b7aae54c2350f12757c9ed6863e15e1e4130db3f0f5b129d75d3fa534"} err="failed to get container status \"f1cdff7b7aae54c2350f12757c9ed6863e15e1e4130db3f0f5b129d75d3fa534\": rpc error: code = NotFound desc = could not find container \"f1cdff7b7aae54c2350f12757c9ed6863e15e1e4130db3f0f5b129d75d3fa534\": container with ID starting with f1cdff7b7aae54c2350f12757c9ed6863e15e1e4130db3f0f5b129d75d3fa534 not found: ID does not exist" Oct 14 16:01:45 crc kubenswrapper[4860]: I1014 16:01:45.071778 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4738335-5957-406d-9e61-1eaf329d93cc" path="/var/lib/kubelet/pods/f4738335-5957-406d-9e61-1eaf329d93cc/volumes" Oct 14 16:01:54 crc kubenswrapper[4860]: I1014 16:01:54.062132 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:01:54 crc kubenswrapper[4860]: E1014 16:01:54.062791 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:02:05 crc kubenswrapper[4860]: I1014 16:02:05.062335 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:02:05 crc kubenswrapper[4860]: E1014 16:02:05.063124 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:02:19 crc kubenswrapper[4860]: I1014 16:02:19.070156 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:02:19 crc kubenswrapper[4860]: E1014 16:02:19.070915 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:02:33 crc kubenswrapper[4860]: I1014 16:02:33.061262 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:02:33 crc kubenswrapper[4860]: E1014 16:02:33.062148 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:02:45 crc kubenswrapper[4860]: I1014 16:02:45.062417 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:02:45 crc kubenswrapper[4860]: E1014 16:02:45.067006 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:02:58 crc kubenswrapper[4860]: I1014 16:02:58.062142 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:02:58 crc kubenswrapper[4860]: E1014 16:02:58.062993 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:03:09 crc kubenswrapper[4860]: I1014 16:03:09.067624 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:03:09 crc kubenswrapper[4860]: E1014 16:03:09.069426 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:03:24 crc kubenswrapper[4860]: I1014 16:03:24.062352 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:03:24 crc kubenswrapper[4860]: E1014 16:03:24.063251 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:03:39 crc kubenswrapper[4860]: I1014 16:03:39.067290 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:03:39 crc kubenswrapper[4860]: E1014 16:03:39.068299 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:03:54 crc kubenswrapper[4860]: I1014 16:03:54.061626 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:03:54 crc kubenswrapper[4860]: E1014 16:03:54.063568 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:04:08 crc kubenswrapper[4860]: I1014 16:04:08.062136 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:04:08 crc kubenswrapper[4860]: E1014 16:04:08.063359 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:04:19 crc kubenswrapper[4860]: I1014 16:04:19.071448 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:04:19 crc kubenswrapper[4860]: E1014 16:04:19.072217 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:04:31 crc kubenswrapper[4860]: I1014 16:04:31.061989 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:04:31 crc kubenswrapper[4860]: E1014 16:04:31.063085 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:04:45 crc kubenswrapper[4860]: I1014 16:04:45.062414 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:04:45 crc kubenswrapper[4860]: E1014 16:04:45.063268 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" 
podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:04:56 crc kubenswrapper[4860]: I1014 16:04:56.061944 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:04:56 crc kubenswrapper[4860]: E1014 16:04:56.062850 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:05:08 crc kubenswrapper[4860]: I1014 16:05:08.062339 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:05:08 crc kubenswrapper[4860]: I1014 16:05:08.494644 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"2d4e0c136b36c1e0148ea536424775d6f7b84960fe87de84b2552ae4fd21ff48"} Oct 14 16:06:32 crc kubenswrapper[4860]: I1014 16:06:32.838703 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qb8l8"] Oct 14 16:06:32 crc kubenswrapper[4860]: E1014 16:06:32.839598 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4738335-5957-406d-9e61-1eaf329d93cc" containerName="extract-content" Oct 14 16:06:32 crc kubenswrapper[4860]: I1014 16:06:32.839610 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4738335-5957-406d-9e61-1eaf329d93cc" containerName="extract-content" Oct 14 16:06:32 crc kubenswrapper[4860]: E1014 16:06:32.839622 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4738335-5957-406d-9e61-1eaf329d93cc" containerName="registry-server" Oct 14 16:06:32 crc kubenswrapper[4860]: I1014 16:06:32.839628 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4738335-5957-406d-9e61-1eaf329d93cc" containerName="registry-server" Oct 14 16:06:32 crc kubenswrapper[4860]: E1014 16:06:32.839666 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4738335-5957-406d-9e61-1eaf329d93cc" containerName="extract-utilities" Oct 14 16:06:32 crc kubenswrapper[4860]: I1014 16:06:32.839673 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4738335-5957-406d-9e61-1eaf329d93cc" containerName="extract-utilities" Oct 14 16:06:32 crc kubenswrapper[4860]: I1014 16:06:32.839860 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4738335-5957-406d-9e61-1eaf329d93cc" containerName="registry-server" Oct 14 16:06:32 crc kubenswrapper[4860]: I1014 16:06:32.841122 4860 util.go:30] "No sandbox for pod can be found. 
Oct 14 16:06:32 crc kubenswrapper[4860]: I1014 16:06:32.838703 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qb8l8"] Oct 14 16:06:32 crc kubenswrapper[4860]: E1014 16:06:32.839598 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4738335-5957-406d-9e61-1eaf329d93cc" containerName="extract-content" Oct 14 16:06:32 crc kubenswrapper[4860]: I1014 16:06:32.839610 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4738335-5957-406d-9e61-1eaf329d93cc" containerName="extract-content" Oct 14 16:06:32 crc kubenswrapper[4860]: E1014 16:06:32.839622 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4738335-5957-406d-9e61-1eaf329d93cc" containerName="registry-server" Oct 14 16:06:32 crc kubenswrapper[4860]: I1014 16:06:32.839628 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4738335-5957-406d-9e61-1eaf329d93cc" containerName="registry-server" Oct 14 16:06:32 crc kubenswrapper[4860]: E1014 16:06:32.839666 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4738335-5957-406d-9e61-1eaf329d93cc" containerName="extract-utilities" Oct 14 16:06:32 crc kubenswrapper[4860]: I1014 16:06:32.839673 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4738335-5957-406d-9e61-1eaf329d93cc" containerName="extract-utilities" Oct 14 16:06:32 crc kubenswrapper[4860]: I1014 16:06:32.839860 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4738335-5957-406d-9e61-1eaf329d93cc" containerName="registry-server" Oct 14 16:06:32 crc kubenswrapper[4860]: I1014 16:06:32.841122 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:32 crc kubenswrapper[4860]: I1014 16:06:32.855383 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qb8l8"] Oct 14 16:06:33 crc kubenswrapper[4860]: I1014 16:06:33.030816 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt8qh\" (UniqueName: \"kubernetes.io/projected/3f963a53-0e28-4740-b175-ecc814d4070c-kube-api-access-zt8qh\") pod \"community-operators-qb8l8\" (UID: \"3f963a53-0e28-4740-b175-ecc814d4070c\") " pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:33 crc kubenswrapper[4860]: I1014 16:06:33.030911 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f963a53-0e28-4740-b175-ecc814d4070c-utilities\") pod \"community-operators-qb8l8\" (UID: \"3f963a53-0e28-4740-b175-ecc814d4070c\") " pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:33 crc kubenswrapper[4860]: I1014 16:06:33.030954 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f963a53-0e28-4740-b175-ecc814d4070c-catalog-content\") pod \"community-operators-qb8l8\" (UID: \"3f963a53-0e28-4740-b175-ecc814d4070c\") " pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:33 crc kubenswrapper[4860]: I1014 16:06:33.132450 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f963a53-0e28-4740-b175-ecc814d4070c-catalog-content\") pod \"community-operators-qb8l8\" (UID: \"3f963a53-0e28-4740-b175-ecc814d4070c\") " pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:33 crc kubenswrapper[4860]: I1014 16:06:33.132839 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8qh\" (UniqueName: \"kubernetes.io/projected/3f963a53-0e28-4740-b175-ecc814d4070c-kube-api-access-zt8qh\") pod \"community-operators-qb8l8\" (UID: \"3f963a53-0e28-4740-b175-ecc814d4070c\") " pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:33 crc kubenswrapper[4860]: I1014 16:06:33.133082 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f963a53-0e28-4740-b175-ecc814d4070c-utilities\") pod \"community-operators-qb8l8\" (UID: \"3f963a53-0e28-4740-b175-ecc814d4070c\") " pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:33 crc kubenswrapper[4860]: I1014 16:06:33.133446 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f963a53-0e28-4740-b175-ecc814d4070c-catalog-content\") pod \"community-operators-qb8l8\" (UID: \"3f963a53-0e28-4740-b175-ecc814d4070c\") " pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:33 crc kubenswrapper[4860]: I1014 16:06:33.133558 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f963a53-0e28-4740-b175-ecc814d4070c-utilities\") pod \"community-operators-qb8l8\" (UID: \"3f963a53-0e28-4740-b175-ecc814d4070c\") " pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:33 crc kubenswrapper[4860]: I1014 16:06:33.161242 4860 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-zt8qh\" (UniqueName: \"kubernetes.io/projected/3f963a53-0e28-4740-b175-ecc814d4070c-kube-api-access-zt8qh\") pod \"community-operators-qb8l8\" (UID: \"3f963a53-0e28-4740-b175-ecc814d4070c\") " pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:33 crc kubenswrapper[4860]: I1014 16:06:33.164420 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:33 crc kubenswrapper[4860]: I1014 16:06:33.758190 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qb8l8"] Oct 14 16:06:34 crc kubenswrapper[4860]: I1014 16:06:34.297864 4860 generic.go:334] "Generic (PLEG): container finished" podID="3f963a53-0e28-4740-b175-ecc814d4070c" containerID="1b25203de82c35a11045ee038544513e53fdd0a11619bbdea701daf912726752" exitCode=0 Oct 14 16:06:34 crc kubenswrapper[4860]: I1014 16:06:34.297906 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb8l8" event={"ID":"3f963a53-0e28-4740-b175-ecc814d4070c","Type":"ContainerDied","Data":"1b25203de82c35a11045ee038544513e53fdd0a11619bbdea701daf912726752"} Oct 14 16:06:34 crc kubenswrapper[4860]: I1014 16:06:34.297931 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb8l8" event={"ID":"3f963a53-0e28-4740-b175-ecc814d4070c","Type":"ContainerStarted","Data":"afa90a7ea5d4f65a26d0e1bc386d996d7b4c5b4f36384c7b946f780594c653f6"} Oct 14 16:06:34 crc kubenswrapper[4860]: I1014 16:06:34.301392 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 16:06:36 crc kubenswrapper[4860]: I1014 16:06:36.315122 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb8l8" event={"ID":"3f963a53-0e28-4740-b175-ecc814d4070c","Type":"ContainerStarted","Data":"877789c0169f5028f5566838acb9af41eaa09f6f3c307e325846b42d89c588dc"} Oct 14 16:06:37 crc kubenswrapper[4860]: I1014 16:06:37.324855 4860 generic.go:334] "Generic (PLEG): container finished" podID="3f963a53-0e28-4740-b175-ecc814d4070c" containerID="877789c0169f5028f5566838acb9af41eaa09f6f3c307e325846b42d89c588dc" exitCode=0 Oct 14 16:06:37 crc kubenswrapper[4860]: I1014 16:06:37.324952 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb8l8" event={"ID":"3f963a53-0e28-4740-b175-ecc814d4070c","Type":"ContainerDied","Data":"877789c0169f5028f5566838acb9af41eaa09f6f3c307e325846b42d89c588dc"} Oct 14 16:06:38 crc kubenswrapper[4860]: I1014 16:06:38.335392 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb8l8" event={"ID":"3f963a53-0e28-4740-b175-ecc814d4070c","Type":"ContainerStarted","Data":"93d64435a1d082c1b68ed8ee74e9e56f8ae12973d6ea97b37d80f53c01f85f85"} Oct 14 16:06:38 crc kubenswrapper[4860]: I1014 16:06:38.355070 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qb8l8" podStartSLOduration=2.827652857 podStartE2EDuration="6.355048305s" podCreationTimestamp="2025-10-14 16:06:32 +0000 UTC" firstStartedPulling="2025-10-14 16:06:34.301169359 +0000 UTC m=+4655.887952808" lastFinishedPulling="2025-10-14 16:06:37.828564807 +0000 UTC m=+4659.415348256" observedRunningTime="2025-10-14 16:06:38.349272595 +0000 UTC m=+4659.936056044" watchObservedRunningTime="2025-10-14 
16:06:38.355048305 +0000 UTC m=+4659.941831764" Oct 14 16:06:43 crc kubenswrapper[4860]: I1014 16:06:43.164958 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:43 crc kubenswrapper[4860]: I1014 16:06:43.165606 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:43 crc kubenswrapper[4860]: I1014 16:06:43.233173 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:43 crc kubenswrapper[4860]: I1014 16:06:43.428580 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:43 crc kubenswrapper[4860]: I1014 16:06:43.480639 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qb8l8"] Oct 14 16:06:45 crc kubenswrapper[4860]: I1014 16:06:45.406016 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qb8l8" podUID="3f963a53-0e28-4740-b175-ecc814d4070c" containerName="registry-server" containerID="cri-o://93d64435a1d082c1b68ed8ee74e9e56f8ae12973d6ea97b37d80f53c01f85f85" gracePeriod=2 Oct 14 16:06:46 crc kubenswrapper[4860]: I1014 16:06:46.416166 4860 generic.go:334] "Generic (PLEG): container finished" podID="3f963a53-0e28-4740-b175-ecc814d4070c" containerID="93d64435a1d082c1b68ed8ee74e9e56f8ae12973d6ea97b37d80f53c01f85f85" exitCode=0 Oct 14 16:06:46 crc kubenswrapper[4860]: I1014 16:06:46.416257 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb8l8" event={"ID":"3f963a53-0e28-4740-b175-ecc814d4070c","Type":"ContainerDied","Data":"93d64435a1d082c1b68ed8ee74e9e56f8ae12973d6ea97b37d80f53c01f85f85"} Oct 14 16:06:46 crc kubenswrapper[4860]: I1014 16:06:46.526346 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:46 crc kubenswrapper[4860]: I1014 16:06:46.677821 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f963a53-0e28-4740-b175-ecc814d4070c-catalog-content\") pod \"3f963a53-0e28-4740-b175-ecc814d4070c\" (UID: \"3f963a53-0e28-4740-b175-ecc814d4070c\") " Oct 14 16:06:46 crc kubenswrapper[4860]: I1014 16:06:46.677931 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt8qh\" (UniqueName: \"kubernetes.io/projected/3f963a53-0e28-4740-b175-ecc814d4070c-kube-api-access-zt8qh\") pod \"3f963a53-0e28-4740-b175-ecc814d4070c\" (UID: \"3f963a53-0e28-4740-b175-ecc814d4070c\") " Oct 14 16:06:46 crc kubenswrapper[4860]: I1014 16:06:46.678070 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f963a53-0e28-4740-b175-ecc814d4070c-utilities\") pod \"3f963a53-0e28-4740-b175-ecc814d4070c\" (UID: \"3f963a53-0e28-4740-b175-ecc814d4070c\") " Oct 14 16:06:46 crc kubenswrapper[4860]: I1014 16:06:46.679262 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f963a53-0e28-4740-b175-ecc814d4070c-utilities" (OuterVolumeSpecName: "utilities") pod "3f963a53-0e28-4740-b175-ecc814d4070c" (UID: "3f963a53-0e28-4740-b175-ecc814d4070c"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 16:06:46 crc kubenswrapper[4860]: I1014 16:06:46.685781 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f963a53-0e28-4740-b175-ecc814d4070c-kube-api-access-zt8qh" (OuterVolumeSpecName: "kube-api-access-zt8qh") pod "3f963a53-0e28-4740-b175-ecc814d4070c" (UID: "3f963a53-0e28-4740-b175-ecc814d4070c"). InnerVolumeSpecName "kube-api-access-zt8qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:06:46 crc kubenswrapper[4860]: I1014 16:06:46.728351 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f963a53-0e28-4740-b175-ecc814d4070c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f963a53-0e28-4740-b175-ecc814d4070c" (UID: "3f963a53-0e28-4740-b175-ecc814d4070c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 16:06:46 crc kubenswrapper[4860]: I1014 16:06:46.780000 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f963a53-0e28-4740-b175-ecc814d4070c-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 16:06:46 crc kubenswrapper[4860]: I1014 16:06:46.780060 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f963a53-0e28-4740-b175-ecc814d4070c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 16:06:46 crc kubenswrapper[4860]: I1014 16:06:46.780076 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt8qh\" (UniqueName: \"kubernetes.io/projected/3f963a53-0e28-4740-b175-ecc814d4070c-kube-api-access-zt8qh\") on node \"crc\" DevicePath \"\"" Oct 14 16:06:47 crc kubenswrapper[4860]: I1014 16:06:47.429310 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb8l8" event={"ID":"3f963a53-0e28-4740-b175-ecc814d4070c","Type":"ContainerDied","Data":"afa90a7ea5d4f65a26d0e1bc386d996d7b4c5b4f36384c7b946f780594c653f6"} Oct 14 16:06:47 crc kubenswrapper[4860]: I1014 16:06:47.429378 4860 scope.go:117] "RemoveContainer" containerID="93d64435a1d082c1b68ed8ee74e9e56f8ae12973d6ea97b37d80f53c01f85f85" Oct 14 16:06:47 crc kubenswrapper[4860]: I1014 16:06:47.429403 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qb8l8" Oct 14 16:06:47 crc kubenswrapper[4860]: I1014 16:06:47.459938 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qb8l8"] Oct 14 16:06:47 crc kubenswrapper[4860]: I1014 16:06:47.464514 4860 scope.go:117] "RemoveContainer" containerID="877789c0169f5028f5566838acb9af41eaa09f6f3c307e325846b42d89c588dc" Oct 14 16:06:47 crc kubenswrapper[4860]: I1014 16:06:47.479590 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qb8l8"] Oct 14 16:06:47 crc kubenswrapper[4860]: I1014 16:06:47.488537 4860 scope.go:117] "RemoveContainer" containerID="1b25203de82c35a11045ee038544513e53fdd0a11619bbdea701daf912726752" Oct 14 16:06:49 crc kubenswrapper[4860]: I1014 16:06:49.074205 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f963a53-0e28-4740-b175-ecc814d4070c" path="/var/lib/kubelet/pods/3f963a53-0e28-4740-b175-ecc814d4070c/volumes" Oct 14 16:07:29 crc kubenswrapper[4860]: I1014 16:07:29.245852 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 16:07:29 crc kubenswrapper[4860]: I1014 16:07:29.246559 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
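
Roughly two and a half minutes after its 16:05:08 restart, machine-config-daemon is already failing its liveness probe: nothing is listening on 127.0.0.1:8798, hence connect: connection refused. If that keeps failing past the probe's failure threshold, the kubelet will kill and restart the container and the crash loop recorded above resumes. Functionally, an HTTP liveness check reduces to a GET with a short timeout where any transport error or non-2xx/3xx status counts as a failure; a sketch against the endpoint from the log (probe period, timeout, and thresholds live in the pod spec, not here):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe returns nil if the endpoint answers with a 2xx/3xx status
    // within the timeout, mirroring HTTP liveness semantics.
    func probe(url string) error {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "connect: connection refused"
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unexpected status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        fmt.Println(probe("http://127.0.0.1:8798/health"))
    }
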
Need to start a new one" pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:07:45 crc kubenswrapper[4860]: I1014 16:07:45.197601 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tbrlt"] Oct 14 16:07:45 crc kubenswrapper[4860]: I1014 16:07:45.266308 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-utilities\") pod \"redhat-operators-tbrlt\" (UID: \"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a\") " pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:07:45 crc kubenswrapper[4860]: I1014 16:07:45.266363 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-catalog-content\") pod \"redhat-operators-tbrlt\" (UID: \"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a\") " pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:07:45 crc kubenswrapper[4860]: I1014 16:07:45.266388 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4s42\" (UniqueName: \"kubernetes.io/projected/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-kube-api-access-t4s42\") pod \"redhat-operators-tbrlt\" (UID: \"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a\") " pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:07:45 crc kubenswrapper[4860]: I1014 16:07:45.368075 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-utilities\") pod \"redhat-operators-tbrlt\" (UID: \"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a\") " pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:07:45 crc kubenswrapper[4860]: I1014 16:07:45.368149 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-catalog-content\") pod \"redhat-operators-tbrlt\" (UID: \"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a\") " pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:07:45 crc kubenswrapper[4860]: I1014 16:07:45.368188 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4s42\" (UniqueName: \"kubernetes.io/projected/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-kube-api-access-t4s42\") pod \"redhat-operators-tbrlt\" (UID: \"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a\") " pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:07:45 crc kubenswrapper[4860]: I1014 16:07:45.368591 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-utilities\") pod \"redhat-operators-tbrlt\" (UID: \"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a\") " pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:07:45 crc kubenswrapper[4860]: I1014 16:07:45.368687 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-catalog-content\") pod \"redhat-operators-tbrlt\" (UID: \"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a\") " pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:07:45 crc kubenswrapper[4860]: I1014 16:07:45.398963 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-t4s42\" (UniqueName: \"kubernetes.io/projected/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-kube-api-access-t4s42\") pod \"redhat-operators-tbrlt\" (UID: \"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a\") " pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:07:45 crc kubenswrapper[4860]: I1014 16:07:45.500822 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:07:46 crc kubenswrapper[4860]: I1014 16:07:46.015557 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tbrlt"] Oct 14 16:07:46 crc kubenswrapper[4860]: W1014 16:07:46.023176 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b8a7b3b_851e_4e0a_a43a_ab8a5e6ac00a.slice/crio-d0a6ab05105fd68692ce22fa514dfad7aa3a1d1c9482a8433fdbf63a8eeee68d WatchSource:0}: Error finding container d0a6ab05105fd68692ce22fa514dfad7aa3a1d1c9482a8433fdbf63a8eeee68d: Status 404 returned error can't find the container with id d0a6ab05105fd68692ce22fa514dfad7aa3a1d1c9482a8433fdbf63a8eeee68d Oct 14 16:07:46 crc kubenswrapper[4860]: I1014 16:07:46.991938 4860 generic.go:334] "Generic (PLEG): container finished" podID="5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a" containerID="116a4095c90125e9ad16b7c5138c891f51cdb29754a84ffe40ec97ec3daa70f5" exitCode=0 Oct 14 16:07:46 crc kubenswrapper[4860]: I1014 16:07:46.991992 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbrlt" event={"ID":"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a","Type":"ContainerDied","Data":"116a4095c90125e9ad16b7c5138c891f51cdb29754a84ffe40ec97ec3daa70f5"} Oct 14 16:07:46 crc kubenswrapper[4860]: I1014 16:07:46.992480 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbrlt" event={"ID":"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a","Type":"ContainerStarted","Data":"d0a6ab05105fd68692ce22fa514dfad7aa3a1d1c9482a8433fdbf63a8eeee68d"} Oct 14 16:07:49 crc kubenswrapper[4860]: I1014 16:07:49.011066 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbrlt" event={"ID":"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a","Type":"ContainerStarted","Data":"0270fdb7c76bb0694477f714440aa38b4eb4303a6f6b47cf4c97c54b4619b43e"} Oct 14 16:07:55 crc kubenswrapper[4860]: I1014 16:07:55.104960 4860 generic.go:334] "Generic (PLEG): container finished" podID="5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a" containerID="0270fdb7c76bb0694477f714440aa38b4eb4303a6f6b47cf4c97c54b4619b43e" exitCode=0 Oct 14 16:07:55 crc kubenswrapper[4860]: I1014 16:07:55.105050 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbrlt" event={"ID":"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a","Type":"ContainerDied","Data":"0270fdb7c76bb0694477f714440aa38b4eb4303a6f6b47cf4c97c54b4619b43e"} Oct 14 16:07:56 crc kubenswrapper[4860]: I1014 16:07:56.115514 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbrlt" event={"ID":"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a","Type":"ContainerStarted","Data":"262eee54857cfa84d796c5563ad6fa597815829291cd4869baf8a1d2f98c2c9e"} Oct 14 16:07:59 crc kubenswrapper[4860]: I1014 16:07:59.245728 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 16:07:59 crc kubenswrapper[4860]: I1014 16:07:59.246362 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 16:08:05 crc kubenswrapper[4860]: I1014 16:08:05.501212 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:08:05 crc kubenswrapper[4860]: I1014 16:08:05.501929 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:08:06 crc kubenswrapper[4860]: I1014 16:08:06.226279 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tbrlt" podStartSLOduration=12.69342998 podStartE2EDuration="21.226257918s" podCreationTimestamp="2025-10-14 16:07:45 +0000 UTC" firstStartedPulling="2025-10-14 16:07:46.99407449 +0000 UTC m=+4728.580857939" lastFinishedPulling="2025-10-14 16:07:55.526902388 +0000 UTC m=+4737.113685877" observedRunningTime="2025-10-14 16:07:56.139037717 +0000 UTC m=+4737.725821166" watchObservedRunningTime="2025-10-14 16:08:06.226257918 +0000 UTC m=+4747.813041367" Oct 14 16:08:06 crc kubenswrapper[4860]: I1014 16:08:06.231627 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bvmr7"] Oct 14 16:08:06 crc kubenswrapper[4860]: I1014 16:08:06.234812 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:06 crc kubenswrapper[4860]: I1014 16:08:06.245523 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bvmr7"] Oct 14 16:08:06 crc kubenswrapper[4860]: I1014 16:08:06.291246 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f8737b-2478-4823-aa11-b1209da36ec0-utilities\") pod \"redhat-marketplace-bvmr7\" (UID: \"98f8737b-2478-4823-aa11-b1209da36ec0\") " pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:06 crc kubenswrapper[4860]: I1014 16:08:06.291393 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f8737b-2478-4823-aa11-b1209da36ec0-catalog-content\") pod \"redhat-marketplace-bvmr7\" (UID: \"98f8737b-2478-4823-aa11-b1209da36ec0\") " pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:06 crc kubenswrapper[4860]: I1014 16:08:06.291585 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh4tn\" (UniqueName: \"kubernetes.io/projected/98f8737b-2478-4823-aa11-b1209da36ec0-kube-api-access-sh4tn\") pod \"redhat-marketplace-bvmr7\" (UID: \"98f8737b-2478-4823-aa11-b1209da36ec0\") " pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:06 crc kubenswrapper[4860]: I1014 16:08:06.393628 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f8737b-2478-4823-aa11-b1209da36ec0-utilities\") pod \"redhat-marketplace-bvmr7\" (UID: 
\"98f8737b-2478-4823-aa11-b1209da36ec0\") " pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:06 crc kubenswrapper[4860]: I1014 16:08:06.393690 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f8737b-2478-4823-aa11-b1209da36ec0-catalog-content\") pod \"redhat-marketplace-bvmr7\" (UID: \"98f8737b-2478-4823-aa11-b1209da36ec0\") " pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:06 crc kubenswrapper[4860]: I1014 16:08:06.393766 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh4tn\" (UniqueName: \"kubernetes.io/projected/98f8737b-2478-4823-aa11-b1209da36ec0-kube-api-access-sh4tn\") pod \"redhat-marketplace-bvmr7\" (UID: \"98f8737b-2478-4823-aa11-b1209da36ec0\") " pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:06 crc kubenswrapper[4860]: I1014 16:08:06.394168 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f8737b-2478-4823-aa11-b1209da36ec0-utilities\") pod \"redhat-marketplace-bvmr7\" (UID: \"98f8737b-2478-4823-aa11-b1209da36ec0\") " pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:06 crc kubenswrapper[4860]: I1014 16:08:06.394262 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f8737b-2478-4823-aa11-b1209da36ec0-catalog-content\") pod \"redhat-marketplace-bvmr7\" (UID: \"98f8737b-2478-4823-aa11-b1209da36ec0\") " pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:06 crc kubenswrapper[4860]: I1014 16:08:06.416629 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh4tn\" (UniqueName: \"kubernetes.io/projected/98f8737b-2478-4823-aa11-b1209da36ec0-kube-api-access-sh4tn\") pod \"redhat-marketplace-bvmr7\" (UID: \"98f8737b-2478-4823-aa11-b1209da36ec0\") " pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:06 crc kubenswrapper[4860]: I1014 16:08:06.552977 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:06 crc kubenswrapper[4860]: I1014 16:08:06.581374 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tbrlt" podUID="5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a" containerName="registry-server" probeResult="failure" output=< Oct 14 16:08:06 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 16:08:06 crc kubenswrapper[4860]: > Oct 14 16:08:07 crc kubenswrapper[4860]: I1014 16:08:07.056254 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bvmr7"] Oct 14 16:08:07 crc kubenswrapper[4860]: I1014 16:08:07.216935 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvmr7" event={"ID":"98f8737b-2478-4823-aa11-b1209da36ec0","Type":"ContainerStarted","Data":"7ad358e6fd6f9718ec67c3a4da20fdb73f229768912496fdc67b35cdb0de641d"} Oct 14 16:08:08 crc kubenswrapper[4860]: I1014 16:08:08.238086 4860 generic.go:334] "Generic (PLEG): container finished" podID="98f8737b-2478-4823-aa11-b1209da36ec0" containerID="e68aeeb46e5792d9edc25712df4fee63c65101d8bff09cee4b5a1ac3cd61682b" exitCode=0 Oct 14 16:08:08 crc kubenswrapper[4860]: I1014 16:08:08.238485 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvmr7" event={"ID":"98f8737b-2478-4823-aa11-b1209da36ec0","Type":"ContainerDied","Data":"e68aeeb46e5792d9edc25712df4fee63c65101d8bff09cee4b5a1ac3cd61682b"} Oct 14 16:08:09 crc kubenswrapper[4860]: I1014 16:08:09.251515 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvmr7" event={"ID":"98f8737b-2478-4823-aa11-b1209da36ec0","Type":"ContainerStarted","Data":"73e874b27bb66360cd938e0b4a402f66369d97f077f7ea877f4260aea3c48a95"} Oct 14 16:08:10 crc kubenswrapper[4860]: I1014 16:08:10.262791 4860 generic.go:334] "Generic (PLEG): container finished" podID="98f8737b-2478-4823-aa11-b1209da36ec0" containerID="73e874b27bb66360cd938e0b4a402f66369d97f077f7ea877f4260aea3c48a95" exitCode=0 Oct 14 16:08:10 crc kubenswrapper[4860]: I1014 16:08:10.262864 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvmr7" event={"ID":"98f8737b-2478-4823-aa11-b1209da36ec0","Type":"ContainerDied","Data":"73e874b27bb66360cd938e0b4a402f66369d97f077f7ea877f4260aea3c48a95"} Oct 14 16:08:11 crc kubenswrapper[4860]: I1014 16:08:11.275453 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvmr7" event={"ID":"98f8737b-2478-4823-aa11-b1209da36ec0","Type":"ContainerStarted","Data":"c487963f32ec9a279a2409a2efb2c17c2aac6459ba6db962500f4625eab15c5a"} Oct 14 16:08:11 crc kubenswrapper[4860]: I1014 16:08:11.312160 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bvmr7" podStartSLOduration=2.77288704 podStartE2EDuration="5.31213993s" podCreationTimestamp="2025-10-14 16:08:06 +0000 UTC" firstStartedPulling="2025-10-14 16:08:08.242690226 +0000 UTC m=+4749.829473665" lastFinishedPulling="2025-10-14 16:08:10.781943106 +0000 UTC m=+4752.368726555" observedRunningTime="2025-10-14 16:08:11.308496021 +0000 UTC m=+4752.895279530" watchObservedRunningTime="2025-10-14 16:08:11.31213993 +0000 UTC m=+4752.898923379" Oct 14 16:08:15 crc kubenswrapper[4860]: I1014 16:08:15.557522 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:08:15 crc kubenswrapper[4860]: I1014 16:08:15.610329 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:08:16 crc kubenswrapper[4860]: I1014 16:08:16.310174 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tbrlt"] Oct 14 16:08:16 crc kubenswrapper[4860]: I1014 16:08:16.553866 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:16 crc kubenswrapper[4860]: I1014 16:08:16.554282 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:16 crc kubenswrapper[4860]: I1014 16:08:16.599679 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:17 crc kubenswrapper[4860]: I1014 16:08:17.338505 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tbrlt" podUID="5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a" containerName="registry-server" containerID="cri-o://262eee54857cfa84d796c5563ad6fa597815829291cd4869baf8a1d2f98c2c9e" gracePeriod=2 Oct 14 16:08:17 crc kubenswrapper[4860]: I1014 16:08:17.408095 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:17 crc kubenswrapper[4860]: I1014 16:08:17.828380 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:08:17 crc kubenswrapper[4860]: I1014 16:08:17.938660 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-catalog-content\") pod \"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a\" (UID: \"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a\") " Oct 14 16:08:17 crc kubenswrapper[4860]: I1014 16:08:17.938860 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4s42\" (UniqueName: \"kubernetes.io/projected/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-kube-api-access-t4s42\") pod \"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a\" (UID: \"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a\") " Oct 14 16:08:17 crc kubenswrapper[4860]: I1014 16:08:17.938917 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-utilities\") pod \"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a\" (UID: \"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a\") " Oct 14 16:08:17 crc kubenswrapper[4860]: I1014 16:08:17.939824 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-utilities" (OuterVolumeSpecName: "utilities") pod "5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a" (UID: "5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 16:08:17 crc kubenswrapper[4860]: I1014 16:08:17.945750 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-kube-api-access-t4s42" (OuterVolumeSpecName: "kube-api-access-t4s42") pod "5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a" (UID: "5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a"). InnerVolumeSpecName "kube-api-access-t4s42". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.040803 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4s42\" (UniqueName: \"kubernetes.io/projected/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-kube-api-access-t4s42\") on node \"crc\" DevicePath \"\"" Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.040842 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.041680 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a" (UID: "5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.143558 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.350316 4860 generic.go:334] "Generic (PLEG): container finished" podID="5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a" containerID="262eee54857cfa84d796c5563ad6fa597815829291cd4869baf8a1d2f98c2c9e" exitCode=0 Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.350800 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tbrlt" Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.351395 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbrlt" event={"ID":"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a","Type":"ContainerDied","Data":"262eee54857cfa84d796c5563ad6fa597815829291cd4869baf8a1d2f98c2c9e"} Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.351437 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbrlt" event={"ID":"5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a","Type":"ContainerDied","Data":"d0a6ab05105fd68692ce22fa514dfad7aa3a1d1c9482a8433fdbf63a8eeee68d"} Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.351461 4860 scope.go:117] "RemoveContainer" containerID="262eee54857cfa84d796c5563ad6fa597815829291cd4869baf8a1d2f98c2c9e" Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.383700 4860 scope.go:117] "RemoveContainer" containerID="0270fdb7c76bb0694477f714440aa38b4eb4303a6f6b47cf4c97c54b4619b43e" Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.395819 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tbrlt"] Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.415304 4860 scope.go:117] "RemoveContainer" containerID="116a4095c90125e9ad16b7c5138c891f51cdb29754a84ffe40ec97ec3daa70f5" Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.419893 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tbrlt"] Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.460547 4860 scope.go:117] "RemoveContainer" containerID="262eee54857cfa84d796c5563ad6fa597815829291cd4869baf8a1d2f98c2c9e" Oct 14 16:08:18 crc kubenswrapper[4860]: E1014 16:08:18.461167 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"262eee54857cfa84d796c5563ad6fa597815829291cd4869baf8a1d2f98c2c9e\": container with ID starting with 262eee54857cfa84d796c5563ad6fa597815829291cd4869baf8a1d2f98c2c9e not found: ID does not exist" containerID="262eee54857cfa84d796c5563ad6fa597815829291cd4869baf8a1d2f98c2c9e" Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.461210 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"262eee54857cfa84d796c5563ad6fa597815829291cd4869baf8a1d2f98c2c9e"} err="failed to get container status \"262eee54857cfa84d796c5563ad6fa597815829291cd4869baf8a1d2f98c2c9e\": rpc error: code = NotFound desc = could not find container \"262eee54857cfa84d796c5563ad6fa597815829291cd4869baf8a1d2f98c2c9e\": container with ID starting with 262eee54857cfa84d796c5563ad6fa597815829291cd4869baf8a1d2f98c2c9e not found: ID does not exist" Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.461237 4860 scope.go:117] "RemoveContainer" containerID="0270fdb7c76bb0694477f714440aa38b4eb4303a6f6b47cf4c97c54b4619b43e" Oct 14 16:08:18 crc kubenswrapper[4860]: E1014 16:08:18.462439 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0270fdb7c76bb0694477f714440aa38b4eb4303a6f6b47cf4c97c54b4619b43e\": container with ID starting with 0270fdb7c76bb0694477f714440aa38b4eb4303a6f6b47cf4c97c54b4619b43e not found: ID does not exist" containerID="0270fdb7c76bb0694477f714440aa38b4eb4303a6f6b47cf4c97c54b4619b43e" Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.462488 4860 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0270fdb7c76bb0694477f714440aa38b4eb4303a6f6b47cf4c97c54b4619b43e"} err="failed to get container status \"0270fdb7c76bb0694477f714440aa38b4eb4303a6f6b47cf4c97c54b4619b43e\": rpc error: code = NotFound desc = could not find container \"0270fdb7c76bb0694477f714440aa38b4eb4303a6f6b47cf4c97c54b4619b43e\": container with ID starting with 0270fdb7c76bb0694477f714440aa38b4eb4303a6f6b47cf4c97c54b4619b43e not found: ID does not exist" Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.462525 4860 scope.go:117] "RemoveContainer" containerID="116a4095c90125e9ad16b7c5138c891f51cdb29754a84ffe40ec97ec3daa70f5" Oct 14 16:08:18 crc kubenswrapper[4860]: E1014 16:08:18.462779 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"116a4095c90125e9ad16b7c5138c891f51cdb29754a84ffe40ec97ec3daa70f5\": container with ID starting with 116a4095c90125e9ad16b7c5138c891f51cdb29754a84ffe40ec97ec3daa70f5 not found: ID does not exist" containerID="116a4095c90125e9ad16b7c5138c891f51cdb29754a84ffe40ec97ec3daa70f5" Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.462797 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"116a4095c90125e9ad16b7c5138c891f51cdb29754a84ffe40ec97ec3daa70f5"} err="failed to get container status \"116a4095c90125e9ad16b7c5138c891f51cdb29754a84ffe40ec97ec3daa70f5\": rpc error: code = NotFound desc = could not find container \"116a4095c90125e9ad16b7c5138c891f51cdb29754a84ffe40ec97ec3daa70f5\": container with ID starting with 116a4095c90125e9ad16b7c5138c891f51cdb29754a84ffe40ec97ec3daa70f5 not found: ID does not exist" Oct 14 16:08:18 crc kubenswrapper[4860]: I1014 16:08:18.908536 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bvmr7"] Oct 14 16:08:19 crc kubenswrapper[4860]: I1014 16:08:19.072844 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a" path="/var/lib/kubelet/pods/5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a/volumes" Oct 14 16:08:19 crc kubenswrapper[4860]: I1014 16:08:19.365138 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bvmr7" podUID="98f8737b-2478-4823-aa11-b1209da36ec0" containerName="registry-server" containerID="cri-o://c487963f32ec9a279a2409a2efb2c17c2aac6459ba6db962500f4625eab15c5a" gracePeriod=2 Oct 14 16:08:19 crc kubenswrapper[4860]: I1014 16:08:19.945302 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:19 crc kubenswrapper[4860]: I1014 16:08:19.976655 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f8737b-2478-4823-aa11-b1209da36ec0-utilities\") pod \"98f8737b-2478-4823-aa11-b1209da36ec0\" (UID: \"98f8737b-2478-4823-aa11-b1209da36ec0\") " Oct 14 16:08:19 crc kubenswrapper[4860]: I1014 16:08:19.976953 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f8737b-2478-4823-aa11-b1209da36ec0-catalog-content\") pod \"98f8737b-2478-4823-aa11-b1209da36ec0\" (UID: \"98f8737b-2478-4823-aa11-b1209da36ec0\") " Oct 14 16:08:19 crc kubenswrapper[4860]: I1014 16:08:19.977010 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh4tn\" (UniqueName: \"kubernetes.io/projected/98f8737b-2478-4823-aa11-b1209da36ec0-kube-api-access-sh4tn\") pod \"98f8737b-2478-4823-aa11-b1209da36ec0\" (UID: \"98f8737b-2478-4823-aa11-b1209da36ec0\") " Oct 14 16:08:19 crc kubenswrapper[4860]: I1014 16:08:19.979582 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f8737b-2478-4823-aa11-b1209da36ec0-utilities" (OuterVolumeSpecName: "utilities") pod "98f8737b-2478-4823-aa11-b1209da36ec0" (UID: "98f8737b-2478-4823-aa11-b1209da36ec0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 16:08:19 crc kubenswrapper[4860]: I1014 16:08:19.983001 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f8737b-2478-4823-aa11-b1209da36ec0-kube-api-access-sh4tn" (OuterVolumeSpecName: "kube-api-access-sh4tn") pod "98f8737b-2478-4823-aa11-b1209da36ec0" (UID: "98f8737b-2478-4823-aa11-b1209da36ec0"). InnerVolumeSpecName "kube-api-access-sh4tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:08:19 crc kubenswrapper[4860]: I1014 16:08:19.999592 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f8737b-2478-4823-aa11-b1209da36ec0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98f8737b-2478-4823-aa11-b1209da36ec0" (UID: "98f8737b-2478-4823-aa11-b1209da36ec0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.078825 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f8737b-2478-4823-aa11-b1209da36ec0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.078860 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh4tn\" (UniqueName: \"kubernetes.io/projected/98f8737b-2478-4823-aa11-b1209da36ec0-kube-api-access-sh4tn\") on node \"crc\" DevicePath \"\"" Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.078871 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f8737b-2478-4823-aa11-b1209da36ec0-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.378344 4860 generic.go:334] "Generic (PLEG): container finished" podID="98f8737b-2478-4823-aa11-b1209da36ec0" containerID="c487963f32ec9a279a2409a2efb2c17c2aac6459ba6db962500f4625eab15c5a" exitCode=0 Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.378395 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvmr7" event={"ID":"98f8737b-2478-4823-aa11-b1209da36ec0","Type":"ContainerDied","Data":"c487963f32ec9a279a2409a2efb2c17c2aac6459ba6db962500f4625eab15c5a"} Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.378428 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvmr7" event={"ID":"98f8737b-2478-4823-aa11-b1209da36ec0","Type":"ContainerDied","Data":"7ad358e6fd6f9718ec67c3a4da20fdb73f229768912496fdc67b35cdb0de641d"} Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.378451 4860 scope.go:117] "RemoveContainer" containerID="c487963f32ec9a279a2409a2efb2c17c2aac6459ba6db962500f4625eab15c5a" Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.378613 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bvmr7" Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.413519 4860 scope.go:117] "RemoveContainer" containerID="73e874b27bb66360cd938e0b4a402f66369d97f077f7ea877f4260aea3c48a95" Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.418392 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bvmr7"] Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.426622 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bvmr7"] Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.435832 4860 scope.go:117] "RemoveContainer" containerID="e68aeeb46e5792d9edc25712df4fee63c65101d8bff09cee4b5a1ac3cd61682b" Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.472436 4860 scope.go:117] "RemoveContainer" containerID="c487963f32ec9a279a2409a2efb2c17c2aac6459ba6db962500f4625eab15c5a" Oct 14 16:08:20 crc kubenswrapper[4860]: E1014 16:08:20.472901 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c487963f32ec9a279a2409a2efb2c17c2aac6459ba6db962500f4625eab15c5a\": container with ID starting with c487963f32ec9a279a2409a2efb2c17c2aac6459ba6db962500f4625eab15c5a not found: ID does not exist" containerID="c487963f32ec9a279a2409a2efb2c17c2aac6459ba6db962500f4625eab15c5a" Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.472931 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c487963f32ec9a279a2409a2efb2c17c2aac6459ba6db962500f4625eab15c5a"} err="failed to get container status \"c487963f32ec9a279a2409a2efb2c17c2aac6459ba6db962500f4625eab15c5a\": rpc error: code = NotFound desc = could not find container \"c487963f32ec9a279a2409a2efb2c17c2aac6459ba6db962500f4625eab15c5a\": container with ID starting with c487963f32ec9a279a2409a2efb2c17c2aac6459ba6db962500f4625eab15c5a not found: ID does not exist" Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.472949 4860 scope.go:117] "RemoveContainer" containerID="73e874b27bb66360cd938e0b4a402f66369d97f077f7ea877f4260aea3c48a95" Oct 14 16:08:20 crc kubenswrapper[4860]: E1014 16:08:20.473317 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73e874b27bb66360cd938e0b4a402f66369d97f077f7ea877f4260aea3c48a95\": container with ID starting with 73e874b27bb66360cd938e0b4a402f66369d97f077f7ea877f4260aea3c48a95 not found: ID does not exist" containerID="73e874b27bb66360cd938e0b4a402f66369d97f077f7ea877f4260aea3c48a95" Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.473372 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73e874b27bb66360cd938e0b4a402f66369d97f077f7ea877f4260aea3c48a95"} err="failed to get container status \"73e874b27bb66360cd938e0b4a402f66369d97f077f7ea877f4260aea3c48a95\": rpc error: code = NotFound desc = could not find container \"73e874b27bb66360cd938e0b4a402f66369d97f077f7ea877f4260aea3c48a95\": container with ID starting with 73e874b27bb66360cd938e0b4a402f66369d97f077f7ea877f4260aea3c48a95 not found: ID does not exist" Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.473411 4860 scope.go:117] "RemoveContainer" containerID="e68aeeb46e5792d9edc25712df4fee63c65101d8bff09cee4b5a1ac3cd61682b" Oct 14 16:08:20 crc kubenswrapper[4860]: E1014 16:08:20.473776 4860 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e68aeeb46e5792d9edc25712df4fee63c65101d8bff09cee4b5a1ac3cd61682b\": container with ID starting with e68aeeb46e5792d9edc25712df4fee63c65101d8bff09cee4b5a1ac3cd61682b not found: ID does not exist" containerID="e68aeeb46e5792d9edc25712df4fee63c65101d8bff09cee4b5a1ac3cd61682b" Oct 14 16:08:20 crc kubenswrapper[4860]: I1014 16:08:20.473858 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e68aeeb46e5792d9edc25712df4fee63c65101d8bff09cee4b5a1ac3cd61682b"} err="failed to get container status \"e68aeeb46e5792d9edc25712df4fee63c65101d8bff09cee4b5a1ac3cd61682b\": rpc error: code = NotFound desc = could not find container \"e68aeeb46e5792d9edc25712df4fee63c65101d8bff09cee4b5a1ac3cd61682b\": container with ID starting with e68aeeb46e5792d9edc25712df4fee63c65101d8bff09cee4b5a1ac3cd61682b not found: ID does not exist" Oct 14 16:08:21 crc kubenswrapper[4860]: I1014 16:08:21.076934 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f8737b-2478-4823-aa11-b1209da36ec0" path="/var/lib/kubelet/pods/98f8737b-2478-4823-aa11-b1209da36ec0/volumes" Oct 14 16:08:29 crc kubenswrapper[4860]: I1014 16:08:29.245390 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 16:08:29 crc kubenswrapper[4860]: I1014 16:08:29.245993 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 16:08:29 crc kubenswrapper[4860]: I1014 16:08:29.246069 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 16:08:29 crc kubenswrapper[4860]: I1014 16:08:29.246861 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d4e0c136b36c1e0148ea536424775d6f7b84960fe87de84b2552ae4fd21ff48"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 16:08:29 crc kubenswrapper[4860]: I1014 16:08:29.246913 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://2d4e0c136b36c1e0148ea536424775d6f7b84960fe87de84b2552ae4fd21ff48" gracePeriod=600 Oct 14 16:08:29 crc kubenswrapper[4860]: I1014 16:08:29.500960 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="2d4e0c136b36c1e0148ea536424775d6f7b84960fe87de84b2552ae4fd21ff48" exitCode=0 Oct 14 16:08:29 crc kubenswrapper[4860]: I1014 16:08:29.501151 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" 
event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"2d4e0c136b36c1e0148ea536424775d6f7b84960fe87de84b2552ae4fd21ff48"} Oct 14 16:08:29 crc kubenswrapper[4860]: I1014 16:08:29.501331 4860 scope.go:117] "RemoveContainer" containerID="d0db144a5d4540944b5b9d5149c6383ea2c94434419797dd51b61bf2d4a52b82" Oct 14 16:08:30 crc kubenswrapper[4860]: I1014 16:08:30.534171 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce"} Oct 14 16:08:53 crc kubenswrapper[4860]: I1014 16:08:53.778108 4860 generic.go:334] "Generic (PLEG): container finished" podID="ccedbfab-f66d-49a5-baac-50c603e57c98" containerID="635c7a3e77d8f3674f0d6a0dded9e523675e9241f6b476d42d5dca41b7fa133d" exitCode=0 Oct 14 16:08:53 crc kubenswrapper[4860]: I1014 16:08:53.778215 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ccedbfab-f66d-49a5-baac-50c603e57c98","Type":"ContainerDied","Data":"635c7a3e77d8f3674f0d6a0dded9e523675e9241f6b476d42d5dca41b7fa133d"} Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.193059 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.326438 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ccedbfab-f66d-49a5-baac-50c603e57c98\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.326533 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-openstack-config-secret\") pod \"ccedbfab-f66d-49a5-baac-50c603e57c98\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.326569 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ccedbfab-f66d-49a5-baac-50c603e57c98-config-data\") pod \"ccedbfab-f66d-49a5-baac-50c603e57c98\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.326682 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ccedbfab-f66d-49a5-baac-50c603e57c98-test-operator-ephemeral-temporary\") pod \"ccedbfab-f66d-49a5-baac-50c603e57c98\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.326714 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-ssh-key\") pod \"ccedbfab-f66d-49a5-baac-50c603e57c98\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.326739 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ccedbfab-f66d-49a5-baac-50c603e57c98-test-operator-ephemeral-workdir\") pod \"ccedbfab-f66d-49a5-baac-50c603e57c98\" (UID: 
\"ccedbfab-f66d-49a5-baac-50c603e57c98\") " Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.326802 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8v28\" (UniqueName: \"kubernetes.io/projected/ccedbfab-f66d-49a5-baac-50c603e57c98-kube-api-access-b8v28\") pod \"ccedbfab-f66d-49a5-baac-50c603e57c98\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.326894 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-ca-certs\") pod \"ccedbfab-f66d-49a5-baac-50c603e57c98\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.327001 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ccedbfab-f66d-49a5-baac-50c603e57c98-openstack-config\") pod \"ccedbfab-f66d-49a5-baac-50c603e57c98\" (UID: \"ccedbfab-f66d-49a5-baac-50c603e57c98\") " Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.327600 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccedbfab-f66d-49a5-baac-50c603e57c98-config-data" (OuterVolumeSpecName: "config-data") pod "ccedbfab-f66d-49a5-baac-50c603e57c98" (UID: "ccedbfab-f66d-49a5-baac-50c603e57c98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.327882 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccedbfab-f66d-49a5-baac-50c603e57c98-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "ccedbfab-f66d-49a5-baac-50c603e57c98" (UID: "ccedbfab-f66d-49a5-baac-50c603e57c98"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.328424 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ccedbfab-f66d-49a5-baac-50c603e57c98-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.328456 4860 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ccedbfab-f66d-49a5-baac-50c603e57c98-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.332488 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccedbfab-f66d-49a5-baac-50c603e57c98-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "ccedbfab-f66d-49a5-baac-50c603e57c98" (UID: "ccedbfab-f66d-49a5-baac-50c603e57c98"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.334637 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "ccedbfab-f66d-49a5-baac-50c603e57c98" (UID: "ccedbfab-f66d-49a5-baac-50c603e57c98"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.347072 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccedbfab-f66d-49a5-baac-50c603e57c98-kube-api-access-b8v28" (OuterVolumeSpecName: "kube-api-access-b8v28") pod "ccedbfab-f66d-49a5-baac-50c603e57c98" (UID: "ccedbfab-f66d-49a5-baac-50c603e57c98"). InnerVolumeSpecName "kube-api-access-b8v28". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.367194 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "ccedbfab-f66d-49a5-baac-50c603e57c98" (UID: "ccedbfab-f66d-49a5-baac-50c603e57c98"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.369177 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ccedbfab-f66d-49a5-baac-50c603e57c98" (UID: "ccedbfab-f66d-49a5-baac-50c603e57c98"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.369912 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "ccedbfab-f66d-49a5-baac-50c603e57c98" (UID: "ccedbfab-f66d-49a5-baac-50c603e57c98"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.403132 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccedbfab-f66d-49a5-baac-50c603e57c98-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "ccedbfab-f66d-49a5-baac-50c603e57c98" (UID: "ccedbfab-f66d-49a5-baac-50c603e57c98"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.430118 4860 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.430198 4860 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ccedbfab-f66d-49a5-baac-50c603e57c98-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.430215 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8v28\" (UniqueName: \"kubernetes.io/projected/ccedbfab-f66d-49a5-baac-50c603e57c98-kube-api-access-b8v28\") on node \"crc\" DevicePath \"\"" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.430229 4860 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-ca-certs\") on node \"crc\" DevicePath \"\"" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.430274 4860 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ccedbfab-f66d-49a5-baac-50c603e57c98-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.430308 4860 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.430324 4860 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ccedbfab-f66d-49a5-baac-50c603e57c98-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.466599 4860 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.532589 4860 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.800695 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ccedbfab-f66d-49a5-baac-50c603e57c98","Type":"ContainerDied","Data":"89d3f6a58deafe2f41a31cbc357b89f57ac8cb38317cc1b61e15c35b30b20aff"} Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.800758 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89d3f6a58deafe2f41a31cbc357b89f57ac8cb38317cc1b61e15c35b30b20aff" Oct 14 16:08:55 crc kubenswrapper[4860]: I1014 16:08:55.800864 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.678834 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 14 16:09:00 crc kubenswrapper[4860]: E1014 16:09:00.679782 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a" containerName="registry-server" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.679799 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a" containerName="registry-server" Oct 14 16:09:00 crc kubenswrapper[4860]: E1014 16:09:00.679826 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f8737b-2478-4823-aa11-b1209da36ec0" containerName="extract-content" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.679840 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f8737b-2478-4823-aa11-b1209da36ec0" containerName="extract-content" Oct 14 16:09:00 crc kubenswrapper[4860]: E1014 16:09:00.679858 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccedbfab-f66d-49a5-baac-50c603e57c98" containerName="tempest-tests-tempest-tests-runner" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.679867 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccedbfab-f66d-49a5-baac-50c603e57c98" containerName="tempest-tests-tempest-tests-runner" Oct 14 16:09:00 crc kubenswrapper[4860]: E1014 16:09:00.679887 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f8737b-2478-4823-aa11-b1209da36ec0" containerName="extract-utilities" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.679895 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f8737b-2478-4823-aa11-b1209da36ec0" containerName="extract-utilities" Oct 14 16:09:00 crc kubenswrapper[4860]: E1014 16:09:00.679920 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a" containerName="extract-content" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.679928 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a" containerName="extract-content" Oct 14 16:09:00 crc kubenswrapper[4860]: E1014 16:09:00.679950 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f8737b-2478-4823-aa11-b1209da36ec0" containerName="registry-server" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.679958 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f8737b-2478-4823-aa11-b1209da36ec0" containerName="registry-server" Oct 14 16:09:00 crc kubenswrapper[4860]: E1014 16:09:00.679968 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a" containerName="extract-utilities" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.679986 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a" containerName="extract-utilities" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.680273 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f8737b-2478-4823-aa11-b1209da36ec0" containerName="registry-server" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.680297 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccedbfab-f66d-49a5-baac-50c603e57c98" containerName="tempest-tests-tempest-tests-runner" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 
16:09:00.680311 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b8a7b3b-851e-4e0a-a43a-ab8a5e6ac00a" containerName="registry-server" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.681114 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.683323 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mlp24" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.693505 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.759936 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9379888-4451-403a-adb4-c9b17890351a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.760318 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwqd9\" (UniqueName: \"kubernetes.io/projected/b9379888-4451-403a-adb4-c9b17890351a-kube-api-access-cwqd9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9379888-4451-403a-adb4-c9b17890351a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.862255 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwqd9\" (UniqueName: \"kubernetes.io/projected/b9379888-4451-403a-adb4-c9b17890351a-kube-api-access-cwqd9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9379888-4451-403a-adb4-c9b17890351a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.862409 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9379888-4451-403a-adb4-c9b17890351a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.862943 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9379888-4451-403a-adb4-c9b17890351a\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.898776 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9379888-4451-403a-adb4-c9b17890351a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 16:09:00 crc kubenswrapper[4860]: I1014 16:09:00.917784 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwqd9\" (UniqueName: 
\"kubernetes.io/projected/b9379888-4451-403a-adb4-c9b17890351a-kube-api-access-cwqd9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9379888-4451-403a-adb4-c9b17890351a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 16:09:01 crc kubenswrapper[4860]: I1014 16:09:01.000639 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 16:09:01 crc kubenswrapper[4860]: I1014 16:09:01.473914 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 14 16:09:01 crc kubenswrapper[4860]: I1014 16:09:01.880389 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b9379888-4451-403a-adb4-c9b17890351a","Type":"ContainerStarted","Data":"af228e56a3bd0bc9e9b41068049d894037e67e017d8d41641faed71625beb090"} Oct 14 16:09:03 crc kubenswrapper[4860]: I1014 16:09:03.898057 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b9379888-4451-403a-adb4-c9b17890351a","Type":"ContainerStarted","Data":"2839218ca646007c18d91ae971c446ac459ba709490ec6e9f004d683e28f84a7"} Oct 14 16:09:03 crc kubenswrapper[4860]: I1014 16:09:03.916738 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.496784108 podStartE2EDuration="3.916715002s" podCreationTimestamp="2025-10-14 16:09:00 +0000 UTC" firstStartedPulling="2025-10-14 16:09:01.482845421 +0000 UTC m=+4803.069628870" lastFinishedPulling="2025-10-14 16:09:02.902776315 +0000 UTC m=+4804.489559764" observedRunningTime="2025-10-14 16:09:03.910730826 +0000 UTC m=+4805.497514295" watchObservedRunningTime="2025-10-14 16:09:03.916715002 +0000 UTC m=+4805.503498481" Oct 14 16:09:20 crc kubenswrapper[4860]: I1014 16:09:20.619598 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mnblv/must-gather-m8c5t"] Oct 14 16:09:20 crc kubenswrapper[4860]: I1014 16:09:20.621725 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mnblv/must-gather-m8c5t" Oct 14 16:09:20 crc kubenswrapper[4860]: I1014 16:09:20.626169 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mnblv"/"kube-root-ca.crt" Oct 14 16:09:20 crc kubenswrapper[4860]: I1014 16:09:20.626585 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mnblv"/"default-dockercfg-7hm76" Oct 14 16:09:20 crc kubenswrapper[4860]: I1014 16:09:20.629780 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mnblv"/"openshift-service-ca.crt" Oct 14 16:09:20 crc kubenswrapper[4860]: I1014 16:09:20.638725 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mnblv/must-gather-m8c5t"] Oct 14 16:09:20 crc kubenswrapper[4860]: I1014 16:09:20.662775 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7tb6\" (UniqueName: \"kubernetes.io/projected/b99aedca-914d-47ab-8261-a05253cc09df-kube-api-access-w7tb6\") pod \"must-gather-m8c5t\" (UID: \"b99aedca-914d-47ab-8261-a05253cc09df\") " pod="openshift-must-gather-mnblv/must-gather-m8c5t" Oct 14 16:09:20 crc kubenswrapper[4860]: I1014 16:09:20.662820 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b99aedca-914d-47ab-8261-a05253cc09df-must-gather-output\") pod \"must-gather-m8c5t\" (UID: \"b99aedca-914d-47ab-8261-a05253cc09df\") " pod="openshift-must-gather-mnblv/must-gather-m8c5t" Oct 14 16:09:20 crc kubenswrapper[4860]: I1014 16:09:20.764634 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7tb6\" (UniqueName: \"kubernetes.io/projected/b99aedca-914d-47ab-8261-a05253cc09df-kube-api-access-w7tb6\") pod \"must-gather-m8c5t\" (UID: \"b99aedca-914d-47ab-8261-a05253cc09df\") " pod="openshift-must-gather-mnblv/must-gather-m8c5t" Oct 14 16:09:20 crc kubenswrapper[4860]: I1014 16:09:20.765819 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b99aedca-914d-47ab-8261-a05253cc09df-must-gather-output\") pod \"must-gather-m8c5t\" (UID: \"b99aedca-914d-47ab-8261-a05253cc09df\") " pod="openshift-must-gather-mnblv/must-gather-m8c5t" Oct 14 16:09:20 crc kubenswrapper[4860]: I1014 16:09:20.766299 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b99aedca-914d-47ab-8261-a05253cc09df-must-gather-output\") pod \"must-gather-m8c5t\" (UID: \"b99aedca-914d-47ab-8261-a05253cc09df\") " pod="openshift-must-gather-mnblv/must-gather-m8c5t" Oct 14 16:09:20 crc kubenswrapper[4860]: I1014 16:09:20.789184 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7tb6\" (UniqueName: \"kubernetes.io/projected/b99aedca-914d-47ab-8261-a05253cc09df-kube-api-access-w7tb6\") pod \"must-gather-m8c5t\" (UID: \"b99aedca-914d-47ab-8261-a05253cc09df\") " pod="openshift-must-gather-mnblv/must-gather-m8c5t" Oct 14 16:09:20 crc kubenswrapper[4860]: I1014 16:09:20.938841 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mnblv/must-gather-m8c5t" Oct 14 16:09:21 crc kubenswrapper[4860]: I1014 16:09:21.445887 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mnblv/must-gather-m8c5t"] Oct 14 16:09:22 crc kubenswrapper[4860]: I1014 16:09:22.075919 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mnblv/must-gather-m8c5t" event={"ID":"b99aedca-914d-47ab-8261-a05253cc09df","Type":"ContainerStarted","Data":"c7ff36eaf89cb8355ec4bab5c49299a32d5b044597635968ac9ed310802cb551"} Oct 14 16:09:27 crc kubenswrapper[4860]: I1014 16:09:27.129883 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mnblv/must-gather-m8c5t" event={"ID":"b99aedca-914d-47ab-8261-a05253cc09df","Type":"ContainerStarted","Data":"589b2e06f960b9383c020194d8c175e2608ec9f31f52a4893b0ed1ed8938c40f"} Oct 14 16:09:27 crc kubenswrapper[4860]: I1014 16:09:27.130450 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mnblv/must-gather-m8c5t" event={"ID":"b99aedca-914d-47ab-8261-a05253cc09df","Type":"ContainerStarted","Data":"a82e33e4ae21ff76abb050c6587b500cc1c45fe5216ccb6284ded2b9a83a432c"} Oct 14 16:09:27 crc kubenswrapper[4860]: I1014 16:09:27.146647 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mnblv/must-gather-m8c5t" podStartSLOduration=2.218535156 podStartE2EDuration="7.146624943s" podCreationTimestamp="2025-10-14 16:09:20 +0000 UTC" firstStartedPulling="2025-10-14 16:09:21.456625692 +0000 UTC m=+4823.043409141" lastFinishedPulling="2025-10-14 16:09:26.384715479 +0000 UTC m=+4827.971498928" observedRunningTime="2025-10-14 16:09:27.142464892 +0000 UTC m=+4828.729248341" watchObservedRunningTime="2025-10-14 16:09:27.146624943 +0000 UTC m=+4828.733408412" Oct 14 16:09:30 crc kubenswrapper[4860]: E1014 16:09:30.890928 4860 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.179:47094->38.102.83.179:42131: write tcp 38.102.83.179:47094->38.102.83.179:42131: write: broken pipe Oct 14 16:09:32 crc kubenswrapper[4860]: I1014 16:09:32.514772 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mnblv/crc-debug-xv7z6"] Oct 14 16:09:32 crc kubenswrapper[4860]: I1014 16:09:32.516213 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mnblv/crc-debug-xv7z6" Oct 14 16:09:32 crc kubenswrapper[4860]: I1014 16:09:32.612408 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/037920f2-6072-4380-bb50-120a0b4ac474-host\") pod \"crc-debug-xv7z6\" (UID: \"037920f2-6072-4380-bb50-120a0b4ac474\") " pod="openshift-must-gather-mnblv/crc-debug-xv7z6" Oct 14 16:09:32 crc kubenswrapper[4860]: I1014 16:09:32.612812 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt48h\" (UniqueName: \"kubernetes.io/projected/037920f2-6072-4380-bb50-120a0b4ac474-kube-api-access-vt48h\") pod \"crc-debug-xv7z6\" (UID: \"037920f2-6072-4380-bb50-120a0b4ac474\") " pod="openshift-must-gather-mnblv/crc-debug-xv7z6" Oct 14 16:09:32 crc kubenswrapper[4860]: I1014 16:09:32.714744 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt48h\" (UniqueName: \"kubernetes.io/projected/037920f2-6072-4380-bb50-120a0b4ac474-kube-api-access-vt48h\") pod \"crc-debug-xv7z6\" (UID: \"037920f2-6072-4380-bb50-120a0b4ac474\") " pod="openshift-must-gather-mnblv/crc-debug-xv7z6" Oct 14 16:09:32 crc kubenswrapper[4860]: I1014 16:09:32.714833 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/037920f2-6072-4380-bb50-120a0b4ac474-host\") pod \"crc-debug-xv7z6\" (UID: \"037920f2-6072-4380-bb50-120a0b4ac474\") " pod="openshift-must-gather-mnblv/crc-debug-xv7z6" Oct 14 16:09:32 crc kubenswrapper[4860]: I1014 16:09:32.714964 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/037920f2-6072-4380-bb50-120a0b4ac474-host\") pod \"crc-debug-xv7z6\" (UID: \"037920f2-6072-4380-bb50-120a0b4ac474\") " pod="openshift-must-gather-mnblv/crc-debug-xv7z6" Oct 14 16:09:32 crc kubenswrapper[4860]: I1014 16:09:32.741458 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt48h\" (UniqueName: \"kubernetes.io/projected/037920f2-6072-4380-bb50-120a0b4ac474-kube-api-access-vt48h\") pod \"crc-debug-xv7z6\" (UID: \"037920f2-6072-4380-bb50-120a0b4ac474\") " pod="openshift-must-gather-mnblv/crc-debug-xv7z6" Oct 14 16:09:32 crc kubenswrapper[4860]: I1014 16:09:32.833530 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mnblv/crc-debug-xv7z6" Oct 14 16:09:33 crc kubenswrapper[4860]: I1014 16:09:33.190377 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mnblv/crc-debug-xv7z6" event={"ID":"037920f2-6072-4380-bb50-120a0b4ac474","Type":"ContainerStarted","Data":"521e41afb52ef0b71b7a2e3a18e6c58adfcbf0261f2d07d48a5d07288044eaaf"} Oct 14 16:09:44 crc kubenswrapper[4860]: I1014 16:09:44.817502 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="84bf98f8-38a7-469a-a6ce-f3b573aa1356" containerName="galera" probeResult="failure" output="command timed out" Oct 14 16:09:44 crc kubenswrapper[4860]: I1014 16:09:44.818354 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="0619b1f4-ea36-41ab-a97b-2a97d516e53c" containerName="galera" probeResult="failure" output="command timed out" Oct 14 16:09:44 crc kubenswrapper[4860]: I1014 16:09:44.818354 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="84bf98f8-38a7-469a-a6ce-f3b573aa1356" containerName="galera" probeResult="failure" output="command timed out" Oct 14 16:09:44 crc kubenswrapper[4860]: I1014 16:09:44.819583 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="0619b1f4-ea36-41ab-a97b-2a97d516e53c" containerName="galera" probeResult="failure" output="command timed out" Oct 14 16:09:50 crc kubenswrapper[4860]: I1014 16:09:50.822776 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Oct 14 16:09:51 crc kubenswrapper[4860]: I1014 16:09:51.120219 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-7545s" podUID="63f2d00d-6dad-48ec-91c9-33ba7f88c5f2" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 16:09:52 crc kubenswrapper[4860]: I1014 16:09:52.819105 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="0619b1f4-ea36-41ab-a97b-2a97d516e53c" containerName="galera" probeResult="failure" output="command timed out" Oct 14 16:09:52 crc kubenswrapper[4860]: I1014 16:09:52.819702 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="0619b1f4-ea36-41ab-a97b-2a97d516e53c" containerName="galera" probeResult="failure" output="command timed out" Oct 14 16:09:54 crc kubenswrapper[4860]: I1014 16:09:54.818243 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="84bf98f8-38a7-469a-a6ce-f3b573aa1356" containerName="galera" probeResult="failure" output="command timed out" Oct 14 16:09:54 crc kubenswrapper[4860]: I1014 16:09:54.819719 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="84bf98f8-38a7-469a-a6ce-f3b573aa1356" containerName="galera" probeResult="failure" output="command timed out" Oct 14 16:09:55 crc kubenswrapper[4860]: I1014 16:09:55.817911 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Oct 14 16:09:59 crc 
kubenswrapper[4860]: E1014 16:09:59.730673 4860 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Oct 14 16:09:59 crc kubenswrapper[4860]: E1014 16:09:59.736332 4860 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{ceilometer-0.186e6765b1f5a72b openstack 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openstack,Name:ceilometer-0,UID:0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee,APIVersion:v1,ResourceVersion:45757,FieldPath:spec.containers{ceilometer-central-agent},},Reason:Unhealthy,Message:Liveness probe failed: command timed out,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-14 16:09:50.823466795 +0000 UTC m=+4852.410250244,LastTimestamp:2025-10-14 16:09:50.823466795 +0000 UTC m=+4852.410250244,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 14 16:09:59 crc kubenswrapper[4860]: I1014 16:09:59.888250 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="6922ab3e-5c2c-43d1-8b29-824fd8c4146c" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.196:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 16:09:59 crc kubenswrapper[4860]: I1014 16:09:59.943969 4860 trace.go:236] Trace[1393709526]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-lm2zv" (14-Oct-2025 16:09:56.946) (total time: 2947ms): Oct 14 16:09:59 crc kubenswrapper[4860]: Trace[1393709526]: [2.947777777s] [2.947777777s] END Oct 14 16:09:59 crc kubenswrapper[4860]: E1014 16:09:59.958443 4860 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"crc\": the object has been modified; please apply your changes to the latest version and try again" Oct 14 16:10:05 crc kubenswrapper[4860]: E1014 16:10:05.207037 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Oct 14 16:10:05 crc kubenswrapper[4860]: E1014 16:10:05.213014 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; 
fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vt48h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-xv7z6_openshift-must-gather-mnblv(037920f2-6072-4380-bb50-120a0b4ac474): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 16:10:05 crc kubenswrapper[4860]: E1014 16:10:05.215135 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-mnblv/crc-debug-xv7z6" podUID="037920f2-6072-4380-bb50-120a0b4ac474" Oct 14 16:10:05 crc kubenswrapper[4860]: E1014 16:10:05.523774 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-mnblv/crc-debug-xv7z6" podUID="037920f2-6072-4380-bb50-120a0b4ac474" Oct 14 16:10:19 crc kubenswrapper[4860]: I1014 16:10:19.662798 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mnblv/crc-debug-xv7z6" event={"ID":"037920f2-6072-4380-bb50-120a0b4ac474","Type":"ContainerStarted","Data":"24defde1e3a8ac87a58d4a616e947e995cf441b161342e68a76c1691589044e5"} Oct 14 16:10:19 crc kubenswrapper[4860]: I1014 16:10:19.682812 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mnblv/crc-debug-xv7z6" podStartSLOduration=1.918160459 podStartE2EDuration="47.682787382s" podCreationTimestamp="2025-10-14 16:09:32 +0000 UTC" firstStartedPulling="2025-10-14 16:09:32.872249269 +0000 UTC m=+4834.459032718" lastFinishedPulling="2025-10-14 16:10:18.636876192 +0000 UTC m=+4880.223659641" observedRunningTime="2025-10-14 16:10:19.675924125 +0000 UTC m=+4881.262707564" watchObservedRunningTime="2025-10-14 16:10:19.682787382 +0000 UTC m=+4881.269570831" Oct 14 16:10:29 crc kubenswrapper[4860]: I1014 16:10:29.245324 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 16:10:29 crc kubenswrapper[4860]: I1014 16:10:29.247128 
4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 16:10:59 crc kubenswrapper[4860]: I1014 16:10:59.245740 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 16:10:59 crc kubenswrapper[4860]: I1014 16:10:59.246419 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 16:11:14 crc kubenswrapper[4860]: I1014 16:11:14.553147 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-777489d894-44kqm_c1b85a60-532b-442f-ab52-86a88e9e2400/barbican-api/0.log" Oct 14 16:11:15 crc kubenswrapper[4860]: I1014 16:11:15.002409 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-777489d894-44kqm_c1b85a60-532b-442f-ab52-86a88e9e2400/barbican-api-log/0.log" Oct 14 16:11:15 crc kubenswrapper[4860]: I1014 16:11:15.090899 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8647888b98-65v2r_ff1ff7d7-b307-4f43-a76a-09da21f5fd05/barbican-keystone-listener/0.log" Oct 14 16:11:15 crc kubenswrapper[4860]: I1014 16:11:15.202792 4860 generic.go:334] "Generic (PLEG): container finished" podID="037920f2-6072-4380-bb50-120a0b4ac474" containerID="24defde1e3a8ac87a58d4a616e947e995cf441b161342e68a76c1691589044e5" exitCode=0 Oct 14 16:11:15 crc kubenswrapper[4860]: I1014 16:11:15.202832 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mnblv/crc-debug-xv7z6" event={"ID":"037920f2-6072-4380-bb50-120a0b4ac474","Type":"ContainerDied","Data":"24defde1e3a8ac87a58d4a616e947e995cf441b161342e68a76c1691589044e5"} Oct 14 16:11:15 crc kubenswrapper[4860]: I1014 16:11:15.282647 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8647888b98-65v2r_ff1ff7d7-b307-4f43-a76a-09da21f5fd05/barbican-keystone-listener-log/0.log" Oct 14 16:11:15 crc kubenswrapper[4860]: I1014 16:11:15.379954 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-98dc5ccc5-l88l9_ef6678e8-7116-4dc1-a7cd-420317d521eb/barbican-worker/0.log" Oct 14 16:11:15 crc kubenswrapper[4860]: I1014 16:11:15.564834 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-98dc5ccc5-l88l9_ef6678e8-7116-4dc1-a7cd-420317d521eb/barbican-worker-log/0.log" Oct 14 16:11:15 crc kubenswrapper[4860]: I1014 16:11:15.653344 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j_18e270ba-e48c-4f9e-bc6a-8269b31f5698/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:15 crc kubenswrapper[4860]: I1014 16:11:15.885198 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee/ceilometer-notification-agent/0.log" Oct 14 16:11:15 crc kubenswrapper[4860]: I1014 16:11:15.894115 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee/ceilometer-central-agent/0.log" Oct 14 16:11:16 crc kubenswrapper[4860]: I1014 16:11:16.077398 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee/proxy-httpd/0.log" Oct 14 16:11:16 crc kubenswrapper[4860]: I1014 16:11:16.168050 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee/sg-core/0.log" Oct 14 16:11:16 crc kubenswrapper[4860]: I1014 16:11:16.320487 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mnblv/crc-debug-xv7z6" Oct 14 16:11:16 crc kubenswrapper[4860]: I1014 16:11:16.352567 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mnblv/crc-debug-xv7z6"] Oct 14 16:11:16 crc kubenswrapper[4860]: I1014 16:11:16.359057 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mnblv/crc-debug-xv7z6"] Oct 14 16:11:16 crc kubenswrapper[4860]: I1014 16:11:16.396189 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/037920f2-6072-4380-bb50-120a0b4ac474-host\") pod \"037920f2-6072-4380-bb50-120a0b4ac474\" (UID: \"037920f2-6072-4380-bb50-120a0b4ac474\") " Oct 14 16:11:16 crc kubenswrapper[4860]: I1014 16:11:16.396333 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt48h\" (UniqueName: \"kubernetes.io/projected/037920f2-6072-4380-bb50-120a0b4ac474-kube-api-access-vt48h\") pod \"037920f2-6072-4380-bb50-120a0b4ac474\" (UID: \"037920f2-6072-4380-bb50-120a0b4ac474\") " Oct 14 16:11:16 crc kubenswrapper[4860]: I1014 16:11:16.397843 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/037920f2-6072-4380-bb50-120a0b4ac474-host" (OuterVolumeSpecName: "host") pod "037920f2-6072-4380-bb50-120a0b4ac474" (UID: "037920f2-6072-4380-bb50-120a0b4ac474"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 16:11:16 crc kubenswrapper[4860]: I1014 16:11:16.401983 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c1c38bae-5346-4f5a-ad7c-24f82dd147cf/cinder-api/0.log" Oct 14 16:11:16 crc kubenswrapper[4860]: I1014 16:11:16.420224 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/037920f2-6072-4380-bb50-120a0b4ac474-kube-api-access-vt48h" (OuterVolumeSpecName: "kube-api-access-vt48h") pod "037920f2-6072-4380-bb50-120a0b4ac474" (UID: "037920f2-6072-4380-bb50-120a0b4ac474"). InnerVolumeSpecName "kube-api-access-vt48h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:11:16 crc kubenswrapper[4860]: I1014 16:11:16.480094 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c1c38bae-5346-4f5a-ad7c-24f82dd147cf/cinder-api-log/0.log" Oct 14 16:11:16 crc kubenswrapper[4860]: I1014 16:11:16.499202 4860 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/037920f2-6072-4380-bb50-120a0b4ac474-host\") on node \"crc\" DevicePath \"\"" Oct 14 16:11:16 crc kubenswrapper[4860]: I1014 16:11:16.499253 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt48h\" (UniqueName: \"kubernetes.io/projected/037920f2-6072-4380-bb50-120a0b4ac474-kube-api-access-vt48h\") on node \"crc\" DevicePath \"\"" Oct 14 16:11:16 crc kubenswrapper[4860]: I1014 16:11:16.697180 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_33f4677b-3c11-4662-9129-35805ee9cab0/cinder-scheduler/0.log" Oct 14 16:11:16 crc kubenswrapper[4860]: I1014 16:11:16.826832 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_33f4677b-3c11-4662-9129-35805ee9cab0/probe/0.log" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.000818 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq_72789ed5-d4cd-4245-ad23-5114f65ab462/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.071814 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="037920f2-6072-4380-bb50-120a0b4ac474" path="/var/lib/kubelet/pods/037920f2-6072-4380-bb50-120a0b4ac474/volumes" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.220644 4860 scope.go:117] "RemoveContainer" containerID="24defde1e3a8ac87a58d4a616e947e995cf441b161342e68a76c1691589044e5" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.220667 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mnblv/crc-debug-xv7z6" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.303923 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6_fd03522b-4930-4c43-ae91-76bd6891424a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.377618 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk_8815aac7-80df-436c-ad49-c49907b6ed3c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.573877 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-bh2nm_2973f190-e42c-4031-9746-70704bafe957/init/0.log" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.710357 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mnblv/crc-debug-hkn5s"] Oct 14 16:11:17 crc kubenswrapper[4860]: E1014 16:11:17.711013 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037920f2-6072-4380-bb50-120a0b4ac474" containerName="container-00" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.711039 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="037920f2-6072-4380-bb50-120a0b4ac474" containerName="container-00" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.711247 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="037920f2-6072-4380-bb50-120a0b4ac474" containerName="container-00" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.711873 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mnblv/crc-debug-hkn5s" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.824655 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/becdcda7-fa7f-4f5d-a519-a36b630a7333-host\") pod \"crc-debug-hkn5s\" (UID: \"becdcda7-fa7f-4f5d-a519-a36b630a7333\") " pod="openshift-must-gather-mnblv/crc-debug-hkn5s" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.825099 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tccmn\" (UniqueName: \"kubernetes.io/projected/becdcda7-fa7f-4f5d-a519-a36b630a7333-kube-api-access-tccmn\") pod \"crc-debug-hkn5s\" (UID: \"becdcda7-fa7f-4f5d-a519-a36b630a7333\") " pod="openshift-must-gather-mnblv/crc-debug-hkn5s" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.883780 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-bh2nm_2973f190-e42c-4031-9746-70704bafe957/init/0.log" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.926484 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tccmn\" (UniqueName: \"kubernetes.io/projected/becdcda7-fa7f-4f5d-a519-a36b630a7333-kube-api-access-tccmn\") pod \"crc-debug-hkn5s\" (UID: \"becdcda7-fa7f-4f5d-a519-a36b630a7333\") " pod="openshift-must-gather-mnblv/crc-debug-hkn5s" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.926536 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/becdcda7-fa7f-4f5d-a519-a36b630a7333-host\") pod \"crc-debug-hkn5s\" (UID: \"becdcda7-fa7f-4f5d-a519-a36b630a7333\") " 
pod="openshift-must-gather-mnblv/crc-debug-hkn5s" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.926673 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/becdcda7-fa7f-4f5d-a519-a36b630a7333-host\") pod \"crc-debug-hkn5s\" (UID: \"becdcda7-fa7f-4f5d-a519-a36b630a7333\") " pod="openshift-must-gather-mnblv/crc-debug-hkn5s" Oct 14 16:11:17 crc kubenswrapper[4860]: I1014 16:11:17.948004 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tccmn\" (UniqueName: \"kubernetes.io/projected/becdcda7-fa7f-4f5d-a519-a36b630a7333-kube-api-access-tccmn\") pod \"crc-debug-hkn5s\" (UID: \"becdcda7-fa7f-4f5d-a519-a36b630a7333\") " pod="openshift-must-gather-mnblv/crc-debug-hkn5s" Oct 14 16:11:18 crc kubenswrapper[4860]: I1014 16:11:18.000854 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-bh2nm_2973f190-e42c-4031-9746-70704bafe957/dnsmasq-dns/0.log" Oct 14 16:11:18 crc kubenswrapper[4860]: I1014 16:11:18.025403 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mnblv/crc-debug-hkn5s" Oct 14 16:11:18 crc kubenswrapper[4860]: I1014 16:11:18.248533 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mnblv/crc-debug-hkn5s" event={"ID":"becdcda7-fa7f-4f5d-a519-a36b630a7333","Type":"ContainerStarted","Data":"03f7e56aaed9b0df2857a5d244d97fa81d7034614569bf18e80477f1d108f1f8"} Oct 14 16:11:18 crc kubenswrapper[4860]: I1014 16:11:18.269818 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq_3b6f14ce-02b7-4b0c-91f7-de180b724b23/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:18 crc kubenswrapper[4860]: I1014 16:11:18.369289 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263/glance-log/0.log" Oct 14 16:11:18 crc kubenswrapper[4860]: I1014 16:11:18.711069 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263/glance-httpd/0.log" Oct 14 16:11:18 crc kubenswrapper[4860]: I1014 16:11:18.954718 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9eea5159-5fa7-4ef7-a7c3-4f98d05085e3/glance-log/0.log" Oct 14 16:11:19 crc kubenswrapper[4860]: I1014 16:11:19.037234 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9eea5159-5fa7-4ef7-a7c3-4f98d05085e3/glance-httpd/0.log" Oct 14 16:11:19 crc kubenswrapper[4860]: I1014 16:11:19.081608 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8795558b4-cgsrj_ba50439f-28b5-4b76-9afb-b705c4037f8d/horizon/1.log" Oct 14 16:11:19 crc kubenswrapper[4860]: I1014 16:11:19.256998 4860 generic.go:334] "Generic (PLEG): container finished" podID="becdcda7-fa7f-4f5d-a519-a36b630a7333" containerID="bfe607d8cf4464fef7433c73604268a819ca87d83899a6e572446ba0222603bd" exitCode=0 Oct 14 16:11:19 crc kubenswrapper[4860]: I1014 16:11:19.257093 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mnblv/crc-debug-hkn5s" event={"ID":"becdcda7-fa7f-4f5d-a519-a36b630a7333","Type":"ContainerDied","Data":"bfe607d8cf4464fef7433c73604268a819ca87d83899a6e572446ba0222603bd"} Oct 14 16:11:19 crc kubenswrapper[4860]: I1014 16:11:19.275826 
4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8795558b4-cgsrj_ba50439f-28b5-4b76-9afb-b705c4037f8d/horizon/0.log" Oct 14 16:11:19 crc kubenswrapper[4860]: I1014 16:11:19.399534 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk_21809a83-1209-4a97-a550-1dfcccd04ec3/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:19 crc kubenswrapper[4860]: I1014 16:11:19.743110 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-dq8ms_1e540b72-fca1-4c14-8830-8fa070543f8c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:20 crc kubenswrapper[4860]: I1014 16:11:20.131628 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8795558b4-cgsrj_ba50439f-28b5-4b76-9afb-b705c4037f8d/horizon-log/0.log" Oct 14 16:11:20 crc kubenswrapper[4860]: I1014 16:11:20.393719 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29340961-lsqnc_e39034f1-fd48-4b12-a14c-55abc2828764/keystone-cron/0.log" Oct 14 16:11:20 crc kubenswrapper[4860]: I1014 16:11:20.433007 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mnblv/crc-debug-hkn5s" Oct 14 16:11:20 crc kubenswrapper[4860]: I1014 16:11:20.490312 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tccmn\" (UniqueName: \"kubernetes.io/projected/becdcda7-fa7f-4f5d-a519-a36b630a7333-kube-api-access-tccmn\") pod \"becdcda7-fa7f-4f5d-a519-a36b630a7333\" (UID: \"becdcda7-fa7f-4f5d-a519-a36b630a7333\") " Oct 14 16:11:20 crc kubenswrapper[4860]: I1014 16:11:20.490469 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/becdcda7-fa7f-4f5d-a519-a36b630a7333-host\") pod \"becdcda7-fa7f-4f5d-a519-a36b630a7333\" (UID: \"becdcda7-fa7f-4f5d-a519-a36b630a7333\") " Oct 14 16:11:20 crc kubenswrapper[4860]: I1014 16:11:20.491159 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/becdcda7-fa7f-4f5d-a519-a36b630a7333-host" (OuterVolumeSpecName: "host") pod "becdcda7-fa7f-4f5d-a519-a36b630a7333" (UID: "becdcda7-fa7f-4f5d-a519-a36b630a7333"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 16:11:20 crc kubenswrapper[4860]: I1014 16:11:20.505286 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/becdcda7-fa7f-4f5d-a519-a36b630a7333-kube-api-access-tccmn" (OuterVolumeSpecName: "kube-api-access-tccmn") pod "becdcda7-fa7f-4f5d-a519-a36b630a7333" (UID: "becdcda7-fa7f-4f5d-a519-a36b630a7333"). InnerVolumeSpecName "kube-api-access-tccmn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:11:20 crc kubenswrapper[4860]: I1014 16:11:20.592412 4860 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/becdcda7-fa7f-4f5d-a519-a36b630a7333-host\") on node \"crc\" DevicePath \"\"" Oct 14 16:11:20 crc kubenswrapper[4860]: I1014 16:11:20.592441 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tccmn\" (UniqueName: \"kubernetes.io/projected/becdcda7-fa7f-4f5d-a519-a36b630a7333-kube-api-access-tccmn\") on node \"crc\" DevicePath \"\"" Oct 14 16:11:20 crc kubenswrapper[4860]: I1014 16:11:20.688627 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6922ab3e-5c2c-43d1-8b29-824fd8c4146c/kube-state-metrics/0.log" Oct 14 16:11:20 crc kubenswrapper[4860]: I1014 16:11:20.753582 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-79ffbddbb5-96v5k_17bac919-7f29-4225-967b-1001b22075b4/keystone-api/0.log" Oct 14 16:11:20 crc kubenswrapper[4860]: I1014 16:11:20.844460 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb_ad612cd6-7c9d-44c4-aa1e-33055de4eee6/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:21 crc kubenswrapper[4860]: I1014 16:11:21.284908 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mnblv/crc-debug-hkn5s" event={"ID":"becdcda7-fa7f-4f5d-a519-a36b630a7333","Type":"ContainerDied","Data":"03f7e56aaed9b0df2857a5d244d97fa81d7034614569bf18e80477f1d108f1f8"} Oct 14 16:11:21 crc kubenswrapper[4860]: I1014 16:11:21.284947 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03f7e56aaed9b0df2857a5d244d97fa81d7034614569bf18e80477f1d108f1f8" Oct 14 16:11:21 crc kubenswrapper[4860]: I1014 16:11:21.285004 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mnblv/crc-debug-hkn5s" Oct 14 16:11:21 crc kubenswrapper[4860]: I1014 16:11:21.780079 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b4bf5b577-882p6_87973523-835b-4676-babb-8ed122fa8b93/neutron-httpd/0.log" Oct 14 16:11:21 crc kubenswrapper[4860]: I1014 16:11:21.919148 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b4bf5b577-882p6_87973523-835b-4676-babb-8ed122fa8b93/neutron-api/0.log" Oct 14 16:11:22 crc kubenswrapper[4860]: I1014 16:11:22.000681 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl_3601c2b8-7185-42fa-bbe1-b0e6b1e07332/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:22 crc kubenswrapper[4860]: I1014 16:11:22.920165 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_0dd800a1-57e1-4a3b-994b-304c941b9e5e/nova-cell0-conductor-conductor/0.log" Oct 14 16:11:23 crc kubenswrapper[4860]: I1014 16:11:23.612252 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_37eab7d3-1474-46a2-85f7-9f874511aea2/nova-cell1-conductor-conductor/0.log" Oct 14 16:11:23 crc kubenswrapper[4860]: I1014 16:11:23.762825 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_be5646ba-6f94-4628-85ef-5091fee066d5/nova-api-log/0.log" Oct 14 16:11:23 crc kubenswrapper[4860]: I1014 16:11:23.842248 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mnblv/crc-debug-hkn5s"] Oct 14 16:11:23 crc kubenswrapper[4860]: I1014 16:11:23.853207 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mnblv/crc-debug-hkn5s"] Oct 14 16:11:24 crc kubenswrapper[4860]: I1014 16:11:24.055510 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_be5646ba-6f94-4628-85ef-5091fee066d5/nova-api-api/0.log" Oct 14 16:11:24 crc kubenswrapper[4860]: I1014 16:11:24.095176 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_d40bc087-e558-4107-8e76-b5daa3ff73c1/nova-cell1-novncproxy-novncproxy/0.log" Oct 14 16:11:24 crc kubenswrapper[4860]: I1014 16:11:24.237211 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_03a44669-ea47-471b-a369-93f6f85bec6b/memcached/0.log" Oct 14 16:11:24 crc kubenswrapper[4860]: I1014 16:11:24.415066 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-52bv4_5ea863c9-1241-4529-b07a-7ded53a8a9ca/nova-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:24 crc kubenswrapper[4860]: I1014 16:11:24.474368 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5/nova-metadata-log/0.log" Oct 14 16:11:24 crc kubenswrapper[4860]: I1014 16:11:24.931585 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_84bf98f8-38a7-469a-a6ce-f3b573aa1356/mysql-bootstrap/0.log" Oct 14 16:11:25 crc kubenswrapper[4860]: I1014 16:11:25.076497 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="becdcda7-fa7f-4f5d-a519-a36b630a7333" path="/var/lib/kubelet/pods/becdcda7-fa7f-4f5d-a519-a36b630a7333/volumes" Oct 14 16:11:25 crc kubenswrapper[4860]: I1014 16:11:25.122244 4860 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-mnblv/crc-debug-pw2g5"] Oct 14 16:11:25 crc kubenswrapper[4860]: E1014 16:11:25.122622 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="becdcda7-fa7f-4f5d-a519-a36b630a7333" containerName="container-00" Oct 14 16:11:25 crc kubenswrapper[4860]: I1014 16:11:25.122639 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="becdcda7-fa7f-4f5d-a519-a36b630a7333" containerName="container-00" Oct 14 16:11:25 crc kubenswrapper[4860]: I1014 16:11:25.122842 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="becdcda7-fa7f-4f5d-a519-a36b630a7333" containerName="container-00" Oct 14 16:11:25 crc kubenswrapper[4860]: I1014 16:11:25.123486 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mnblv/crc-debug-pw2g5" Oct 14 16:11:25 crc kubenswrapper[4860]: I1014 16:11:25.170755 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_edadf2e8-459f-4994-a1f8-a059cbdb46c6/nova-scheduler-scheduler/0.log" Oct 14 16:11:25 crc kubenswrapper[4860]: I1014 16:11:25.176328 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/979e1edc-d9fd-4508-abb8-19aea56540ca-host\") pod \"crc-debug-pw2g5\" (UID: \"979e1edc-d9fd-4508-abb8-19aea56540ca\") " pod="openshift-must-gather-mnblv/crc-debug-pw2g5" Oct 14 16:11:25 crc kubenswrapper[4860]: I1014 16:11:25.176480 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n74mr\" (UniqueName: \"kubernetes.io/projected/979e1edc-d9fd-4508-abb8-19aea56540ca-kube-api-access-n74mr\") pod \"crc-debug-pw2g5\" (UID: \"979e1edc-d9fd-4508-abb8-19aea56540ca\") " pod="openshift-must-gather-mnblv/crc-debug-pw2g5" Oct 14 16:11:25 crc kubenswrapper[4860]: I1014 16:11:25.237233 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_84bf98f8-38a7-469a-a6ce-f3b573aa1356/mysql-bootstrap/0.log" Oct 14 16:11:25 crc kubenswrapper[4860]: I1014 16:11:25.278141 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/979e1edc-d9fd-4508-abb8-19aea56540ca-host\") pod \"crc-debug-pw2g5\" (UID: \"979e1edc-d9fd-4508-abb8-19aea56540ca\") " pod="openshift-must-gather-mnblv/crc-debug-pw2g5" Oct 14 16:11:25 crc kubenswrapper[4860]: I1014 16:11:25.278534 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n74mr\" (UniqueName: \"kubernetes.io/projected/979e1edc-d9fd-4508-abb8-19aea56540ca-kube-api-access-n74mr\") pod \"crc-debug-pw2g5\" (UID: \"979e1edc-d9fd-4508-abb8-19aea56540ca\") " pod="openshift-must-gather-mnblv/crc-debug-pw2g5" Oct 14 16:11:25 crc kubenswrapper[4860]: I1014 16:11:25.278268 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/979e1edc-d9fd-4508-abb8-19aea56540ca-host\") pod \"crc-debug-pw2g5\" (UID: \"979e1edc-d9fd-4508-abb8-19aea56540ca\") " pod="openshift-must-gather-mnblv/crc-debug-pw2g5" Oct 14 16:11:25 crc kubenswrapper[4860]: I1014 16:11:25.308960 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n74mr\" (UniqueName: \"kubernetes.io/projected/979e1edc-d9fd-4508-abb8-19aea56540ca-kube-api-access-n74mr\") pod \"crc-debug-pw2g5\" (UID: \"979e1edc-d9fd-4508-abb8-19aea56540ca\") " 
pod="openshift-must-gather-mnblv/crc-debug-pw2g5" Oct 14 16:11:25 crc kubenswrapper[4860]: I1014 16:11:25.406884 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_84bf98f8-38a7-469a-a6ce-f3b573aa1356/galera/0.log" Oct 14 16:11:25 crc kubenswrapper[4860]: I1014 16:11:25.449273 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mnblv/crc-debug-pw2g5" Oct 14 16:11:25 crc kubenswrapper[4860]: I1014 16:11:25.729954 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5/nova-metadata-metadata/0.log" Oct 14 16:11:26 crc kubenswrapper[4860]: I1014 16:11:26.000987 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0619b1f4-ea36-41ab-a97b-2a97d516e53c/mysql-bootstrap/0.log" Oct 14 16:11:26 crc kubenswrapper[4860]: I1014 16:11:26.065456 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0619b1f4-ea36-41ab-a97b-2a97d516e53c/mysql-bootstrap/0.log" Oct 14 16:11:26 crc kubenswrapper[4860]: I1014 16:11:26.108949 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0619b1f4-ea36-41ab-a97b-2a97d516e53c/galera/0.log" Oct 14 16:11:26 crc kubenswrapper[4860]: I1014 16:11:26.267507 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_0923e67e-dcfe-48bd-9987-c24810447a3e/openstackclient/0.log" Oct 14 16:11:26 crc kubenswrapper[4860]: I1014 16:11:26.330150 4860 generic.go:334] "Generic (PLEG): container finished" podID="979e1edc-d9fd-4508-abb8-19aea56540ca" containerID="319892a136839aa189970621d3630471ff40e09bb4bb5ae2aea4599a80091904" exitCode=0 Oct 14 16:11:26 crc kubenswrapper[4860]: I1014 16:11:26.330203 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mnblv/crc-debug-pw2g5" event={"ID":"979e1edc-d9fd-4508-abb8-19aea56540ca","Type":"ContainerDied","Data":"319892a136839aa189970621d3630471ff40e09bb4bb5ae2aea4599a80091904"} Oct 14 16:11:26 crc kubenswrapper[4860]: I1014 16:11:26.330239 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mnblv/crc-debug-pw2g5" event={"ID":"979e1edc-d9fd-4508-abb8-19aea56540ca","Type":"ContainerStarted","Data":"2c3b73ba506210a84dc4fe4ddd114cd623f7c3dbbfeede54110beb8972410cf7"} Oct 14 16:11:26 crc kubenswrapper[4860]: I1014 16:11:26.344920 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-s4vnv_cb8d65af-6ce5-4a61-ad15-c32aeb71c190/openstack-network-exporter/0.log" Oct 14 16:11:26 crc kubenswrapper[4860]: I1014 16:11:26.360589 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mnblv/crc-debug-pw2g5"] Oct 14 16:11:26 crc kubenswrapper[4860]: I1014 16:11:26.370508 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mnblv/crc-debug-pw2g5"] Oct 14 16:11:26 crc kubenswrapper[4860]: I1014 16:11:26.519641 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vbhtr_517eb23f-ec49-4288-a019-df9ac4da8ccd/ovsdb-server-init/0.log" Oct 14 16:11:26 crc kubenswrapper[4860]: I1014 16:11:26.646933 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vbhtr_517eb23f-ec49-4288-a019-df9ac4da8ccd/ovs-vswitchd/0.log" Oct 14 16:11:26 crc kubenswrapper[4860]: I1014 16:11:26.737372 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-vbhtr_517eb23f-ec49-4288-a019-df9ac4da8ccd/ovsdb-server-init/0.log" Oct 14 16:11:26 crc kubenswrapper[4860]: I1014 16:11:26.744561 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vbhtr_517eb23f-ec49-4288-a019-df9ac4da8ccd/ovsdb-server/0.log" Oct 14 16:11:26 crc kubenswrapper[4860]: I1014 16:11:26.769959 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sc6wm_8fbd86ca-1d38-4b27-bd36-62198c367b3d/ovn-controller/0.log" Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.110974 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-jm89w_758f6aec-34fc-48fc-a6bb-f6ac287a02d0/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.170416 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9be28925-a379-4ecf-8021-5a16dbd9b666/openstack-network-exporter/0.log" Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.248871 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9be28925-a379-4ecf-8021-5a16dbd9b666/ovn-northd/0.log" Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.388413 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9ea3e827-d3d5-481d-b8f6-90b20be97f2e/openstack-network-exporter/0.log" Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.439092 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9ea3e827-d3d5-481d-b8f6-90b20be97f2e/ovsdbserver-nb/0.log" Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.450884 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mnblv/crc-debug-pw2g5" Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.509492 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ac3dbbff-ef4c-461d-b2a0-58284b598cb4/openstack-network-exporter/0.log" Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.515896 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n74mr\" (UniqueName: \"kubernetes.io/projected/979e1edc-d9fd-4508-abb8-19aea56540ca-kube-api-access-n74mr\") pod \"979e1edc-d9fd-4508-abb8-19aea56540ca\" (UID: \"979e1edc-d9fd-4508-abb8-19aea56540ca\") " Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.515942 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/979e1edc-d9fd-4508-abb8-19aea56540ca-host\") pod \"979e1edc-d9fd-4508-abb8-19aea56540ca\" (UID: \"979e1edc-d9fd-4508-abb8-19aea56540ca\") " Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.516060 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/979e1edc-d9fd-4508-abb8-19aea56540ca-host" (OuterVolumeSpecName: "host") pod "979e1edc-d9fd-4508-abb8-19aea56540ca" (UID: "979e1edc-d9fd-4508-abb8-19aea56540ca"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.516598 4860 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/979e1edc-d9fd-4508-abb8-19aea56540ca-host\") on node \"crc\" DevicePath \"\"" Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.527868 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/979e1edc-d9fd-4508-abb8-19aea56540ca-kube-api-access-n74mr" (OuterVolumeSpecName: "kube-api-access-n74mr") pod "979e1edc-d9fd-4508-abb8-19aea56540ca" (UID: "979e1edc-d9fd-4508-abb8-19aea56540ca"). InnerVolumeSpecName "kube-api-access-n74mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.618848 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n74mr\" (UniqueName: \"kubernetes.io/projected/979e1edc-d9fd-4508-abb8-19aea56540ca-kube-api-access-n74mr\") on node \"crc\" DevicePath \"\"" Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.626452 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ac3dbbff-ef4c-461d-b2a0-58284b598cb4/ovsdbserver-sb/0.log" Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.930652 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64cf955b6-w5x5t_ab2aac74-c03a-4d14-a332-ab84606c9864/placement-api/0.log" Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.972002 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64cf955b6-w5x5t_ab2aac74-c03a-4d14-a332-ab84606c9864/placement-log/0.log" Oct 14 16:11:27 crc kubenswrapper[4860]: I1014 16:11:27.977387 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b636d89a-e295-48f6-8679-c6c7b0f998cf/setup-container/0.log" Oct 14 16:11:28 crc kubenswrapper[4860]: I1014 16:11:28.254656 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b636d89a-e295-48f6-8679-c6c7b0f998cf/rabbitmq/0.log" Oct 14 16:11:28 crc kubenswrapper[4860]: I1014 16:11:28.281789 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b636d89a-e295-48f6-8679-c6c7b0f998cf/setup-container/0.log" Oct 14 16:11:28 crc kubenswrapper[4860]: I1014 16:11:28.346924 4860 scope.go:117] "RemoveContainer" containerID="319892a136839aa189970621d3630471ff40e09bb4bb5ae2aea4599a80091904" Oct 14 16:11:28 crc kubenswrapper[4860]: I1014 16:11:28.347232 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mnblv/crc-debug-pw2g5" Oct 14 16:11:28 crc kubenswrapper[4860]: I1014 16:11:28.625613 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a7bde387-0de9-44df-84cf-3db5e96019c9/setup-container/0.log" Oct 14 16:11:28 crc kubenswrapper[4860]: I1014 16:11:28.843648 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k_bf875e18-0a4b-4caf-85e0-fe7d96ace688/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:28 crc kubenswrapper[4860]: I1014 16:11:28.849501 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a7bde387-0de9-44df-84cf-3db5e96019c9/setup-container/0.log" Oct 14 16:11:28 crc kubenswrapper[4860]: I1014 16:11:28.898218 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a7bde387-0de9-44df-84cf-3db5e96019c9/rabbitmq/0.log" Oct 14 16:11:29 crc kubenswrapper[4860]: I1014 16:11:29.082278 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="979e1edc-d9fd-4508-abb8-19aea56540ca" path="/var/lib/kubelet/pods/979e1edc-d9fd-4508-abb8-19aea56540ca/volumes" Oct 14 16:11:29 crc kubenswrapper[4860]: I1014 16:11:29.245992 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 16:11:29 crc kubenswrapper[4860]: I1014 16:11:29.246093 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 16:11:29 crc kubenswrapper[4860]: I1014 16:11:29.246145 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 16:11:29 crc kubenswrapper[4860]: I1014 16:11:29.246797 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 16:11:29 crc kubenswrapper[4860]: I1014 16:11:29.246852 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" gracePeriod=600 Oct 14 16:11:29 crc kubenswrapper[4860]: E1014 16:11:29.421595 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:11:29 crc 
kubenswrapper[4860]: I1014 16:11:29.644992 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4jpw4_9af1a0e5-8c28-4be6-8906-f60775a83304/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:29 crc kubenswrapper[4860]: I1014 16:11:29.648650 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-c77ns_a935dc27-6373-4538-8676-b2532a79575c/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:29 crc kubenswrapper[4860]: I1014 16:11:29.649098 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk_442b40ad-4a75-4690-ab2a-a63194e46aac/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:29 crc kubenswrapper[4860]: I1014 16:11:29.886000 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-47tvx_b3b6bfde-9f16-4803-8b4c-2aba73c9612f/ssh-known-hosts-edpm-deployment/0.log" Oct 14 16:11:30 crc kubenswrapper[4860]: I1014 16:11:30.059692 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-76b8bb94b7-r2cx7_b791e9e4-1b27-429a-9811-2b956a974e3a/proxy-httpd/0.log" Oct 14 16:11:30 crc kubenswrapper[4860]: I1014 16:11:30.162400 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-76b8bb94b7-r2cx7_b791e9e4-1b27-429a-9811-2b956a974e3a/proxy-server/0.log" Oct 14 16:11:30 crc kubenswrapper[4860]: I1014 16:11:30.186743 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-b2bd2_4cc19e55-2664-49bd-8f7e-856d1c9b3ecd/swift-ring-rebalance/0.log" Oct 14 16:11:30 crc kubenswrapper[4860]: I1014 16:11:30.369126 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" exitCode=0 Oct 14 16:11:30 crc kubenswrapper[4860]: I1014 16:11:30.369176 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce"} Oct 14 16:11:30 crc kubenswrapper[4860]: I1014 16:11:30.369214 4860 scope.go:117] "RemoveContainer" containerID="2d4e0c136b36c1e0148ea536424775d6f7b84960fe87de84b2552ae4fd21ff48" Oct 14 16:11:30 crc kubenswrapper[4860]: I1014 16:11:30.369850 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:11:30 crc kubenswrapper[4860]: E1014 16:11:30.370177 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:11:30 crc kubenswrapper[4860]: I1014 16:11:30.463400 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/account-auditor/0.log" Oct 14 16:11:30 crc kubenswrapper[4860]: I1014 16:11:30.473871 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/account-replicator/0.log" Oct 14 16:11:30 crc kubenswrapper[4860]: I1014 16:11:30.484955 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/account-reaper/0.log" Oct 14 16:11:30 crc kubenswrapper[4860]: I1014 16:11:30.567311 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/account-server/0.log" Oct 14 16:11:31 crc kubenswrapper[4860]: I1014 16:11:31.032611 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/container-auditor/0.log" Oct 14 16:11:31 crc kubenswrapper[4860]: I1014 16:11:31.066142 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/container-replicator/0.log" Oct 14 16:11:31 crc kubenswrapper[4860]: I1014 16:11:31.069448 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/container-updater/0.log" Oct 14 16:11:31 crc kubenswrapper[4860]: I1014 16:11:31.076856 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/container-server/0.log" Oct 14 16:11:31 crc kubenswrapper[4860]: I1014 16:11:31.095057 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/object-auditor/0.log" Oct 14 16:11:31 crc kubenswrapper[4860]: I1014 16:11:31.309758 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/object-server/0.log" Oct 14 16:11:31 crc kubenswrapper[4860]: I1014 16:11:31.309993 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/object-expirer/0.log" Oct 14 16:11:31 crc kubenswrapper[4860]: I1014 16:11:31.310264 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/object-updater/0.log" Oct 14 16:11:31 crc kubenswrapper[4860]: I1014 16:11:31.319020 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/object-replicator/0.log" Oct 14 16:11:31 crc kubenswrapper[4860]: I1014 16:11:31.371475 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/rsync/0.log" Oct 14 16:11:31 crc kubenswrapper[4860]: I1014 16:11:31.609451 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/swift-recon-cron/0.log" Oct 14 16:11:31 crc kubenswrapper[4860]: I1014 16:11:31.655700 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw_567e371c-991d-4515-98bf-b17f6573a744/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:31 crc kubenswrapper[4860]: I1014 16:11:31.688403 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_ccedbfab-f66d-49a5-baac-50c603e57c98/tempest-tests-tempest-tests-runner/0.log" Oct 14 16:11:31 crc kubenswrapper[4860]: I1014 16:11:31.939483 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b9379888-4451-403a-adb4-c9b17890351a/test-operator-logs-container/0.log" Oct 14 16:11:32 crc kubenswrapper[4860]: I1014 16:11:32.016646 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-22rlb_487e54e1-aee7-4e2c-abdd-903ea61b0b11/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:11:43 crc kubenswrapper[4860]: I1014 16:11:43.062014 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:11:43 crc kubenswrapper[4860]: E1014 16:11:43.062790 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:11:57 crc kubenswrapper[4860]: I1014 16:11:57.722325 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-lwpwz_95d281e4-c140-42c3-ba4e-3d36e98bb29c/kube-rbac-proxy/0.log" Oct 14 16:11:57 crc kubenswrapper[4860]: I1014 16:11:57.772578 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-lwpwz_95d281e4-c140-42c3-ba4e-3d36e98bb29c/manager/0.log" Oct 14 16:11:57 crc kubenswrapper[4860]: I1014 16:11:57.895660 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb_940211d1-5595-4283-b049-57cc681b2ffc/util/0.log" Oct 14 16:11:58 crc kubenswrapper[4860]: I1014 16:11:58.063005 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:11:58 crc kubenswrapper[4860]: E1014 16:11:58.063301 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:11:58 crc kubenswrapper[4860]: I1014 16:11:58.173257 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb_940211d1-5595-4283-b049-57cc681b2ffc/util/0.log" Oct 14 16:11:58 crc kubenswrapper[4860]: I1014 16:11:58.225380 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb_940211d1-5595-4283-b049-57cc681b2ffc/pull/0.log" Oct 14 16:11:58 crc kubenswrapper[4860]: I1014 16:11:58.241569 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb_940211d1-5595-4283-b049-57cc681b2ffc/pull/0.log" Oct 14 16:11:58 crc kubenswrapper[4860]: I1014 16:11:58.405102 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb_940211d1-5595-4283-b049-57cc681b2ffc/pull/0.log" Oct 14 16:11:58 crc kubenswrapper[4860]: I1014 16:11:58.430462 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb_940211d1-5595-4283-b049-57cc681b2ffc/util/0.log" Oct 14 16:11:58 crc kubenswrapper[4860]: I1014 16:11:58.447817 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb_940211d1-5595-4283-b049-57cc681b2ffc/extract/0.log" Oct 14 16:11:58 crc kubenswrapper[4860]: I1014 16:11:58.608836 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-6lzwd_8680f35c-eae8-49e0-a670-d4b467a987f0/kube-rbac-proxy/0.log" Oct 14 16:11:58 crc kubenswrapper[4860]: I1014 16:11:58.750066 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-f2rfg_6b31fe2f-695e-4b8b-b632-7075e4a9740f/kube-rbac-proxy/0.log" Oct 14 16:11:58 crc kubenswrapper[4860]: I1014 16:11:58.760386 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-6lzwd_8680f35c-eae8-49e0-a670-d4b467a987f0/manager/0.log" Oct 14 16:11:58 crc kubenswrapper[4860]: I1014 16:11:58.863732 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-f2rfg_6b31fe2f-695e-4b8b-b632-7075e4a9740f/manager/0.log" Oct 14 16:11:59 crc kubenswrapper[4860]: I1014 16:11:59.331900 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-dgxfp_65912b78-7ceb-4bd0-ab72-70fd3574b786/kube-rbac-proxy/0.log" Oct 14 16:11:59 crc kubenswrapper[4860]: I1014 16:11:59.419142 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-8mngx_e24ba4ef-9297-4d61-a338-941ce00a2391/kube-rbac-proxy/0.log" Oct 14 16:11:59 crc kubenswrapper[4860]: I1014 16:11:59.473246 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-dgxfp_65912b78-7ceb-4bd0-ab72-70fd3574b786/manager/0.log" Oct 14 16:11:59 crc kubenswrapper[4860]: I1014 16:11:59.567501 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-8mngx_e24ba4ef-9297-4d61-a338-941ce00a2391/manager/0.log" Oct 14 16:11:59 crc kubenswrapper[4860]: I1014 16:11:59.708333 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-mdd5z_95d178d8-e3b2-4141-91af-b82fa61bd86a/kube-rbac-proxy/0.log" Oct 14 16:11:59 crc kubenswrapper[4860]: I1014 16:11:59.752810 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-mdd5z_95d178d8-e3b2-4141-91af-b82fa61bd86a/manager/0.log" Oct 14 16:11:59 crc kubenswrapper[4860]: I1014 16:11:59.888322 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-hpkm4_1b0c2826-792e-44ca-9bc1-830aefee72d6/kube-rbac-proxy/0.log" Oct 14 16:12:00 crc kubenswrapper[4860]: I1014 16:12:00.027496 4860 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-l9w8x_e3456832-68ce-443e-825f-9d6af6cf829f/kube-rbac-proxy/0.log" Oct 14 16:12:00 crc kubenswrapper[4860]: I1014 16:12:00.042643 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-hpkm4_1b0c2826-792e-44ca-9bc1-830aefee72d6/manager/0.log" Oct 14 16:12:00 crc kubenswrapper[4860]: I1014 16:12:00.225269 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-l9w8x_e3456832-68ce-443e-825f-9d6af6cf829f/manager/0.log" Oct 14 16:12:00 crc kubenswrapper[4860]: I1014 16:12:00.270186 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-8ht4q_4bbd7b36-79fe-423b-a5c6-2237390dea3f/kube-rbac-proxy/0.log" Oct 14 16:12:00 crc kubenswrapper[4860]: I1014 16:12:00.389118 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-8ht4q_4bbd7b36-79fe-423b-a5c6-2237390dea3f/manager/0.log" Oct 14 16:12:00 crc kubenswrapper[4860]: I1014 16:12:00.470376 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-bc4x8_786a4f8b-062c-46b7-8028-5079481427db/kube-rbac-proxy/0.log" Oct 14 16:12:00 crc kubenswrapper[4860]: I1014 16:12:00.628331 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-bc4x8_786a4f8b-062c-46b7-8028-5079481427db/manager/0.log" Oct 14 16:12:00 crc kubenswrapper[4860]: I1014 16:12:00.713258 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-2rpj7_5397040a-47ac-487d-8e5a-8fd02d6ec654/kube-rbac-proxy/0.log" Oct 14 16:12:00 crc kubenswrapper[4860]: I1014 16:12:00.804507 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-2rpj7_5397040a-47ac-487d-8e5a-8fd02d6ec654/manager/0.log" Oct 14 16:12:00 crc kubenswrapper[4860]: I1014 16:12:00.888858 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-lfntw_1e6c58c7-4e05-4c8d-98f0-2063b1ba613f/kube-rbac-proxy/0.log" Oct 14 16:12:01 crc kubenswrapper[4860]: I1014 16:12:01.031339 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-tw4ph_1f864c3d-2e54-459b-b613-3785d0cf4ae6/kube-rbac-proxy/0.log" Oct 14 16:12:01 crc kubenswrapper[4860]: I1014 16:12:01.047580 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-lfntw_1e6c58c7-4e05-4c8d-98f0-2063b1ba613f/manager/0.log" Oct 14 16:12:01 crc kubenswrapper[4860]: I1014 16:12:01.125087 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-tw4ph_1f864c3d-2e54-459b-b613-3785d0cf4ae6/manager/0.log" Oct 14 16:12:01 crc kubenswrapper[4860]: I1014 16:12:01.237014 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-5l2qq_ad189aa9-4e21-4d7e-b1de-83497bd83376/kube-rbac-proxy/0.log" Oct 14 16:12:01 crc kubenswrapper[4860]: I1014 
16:12:01.361107 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-5l2qq_ad189aa9-4e21-4d7e-b1de-83497bd83376/manager/0.log" Oct 14 16:12:01 crc kubenswrapper[4860]: I1014 16:12:01.378664 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt_df4d54ec-6345-4b47-8ae8-58ae0bf6da7f/kube-rbac-proxy/0.log" Oct 14 16:12:01 crc kubenswrapper[4860]: I1014 16:12:01.433941 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt_df4d54ec-6345-4b47-8ae8-58ae0bf6da7f/manager/0.log" Oct 14 16:12:01 crc kubenswrapper[4860]: I1014 16:12:01.627409 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-768555b76-hzfmn_c584f96e-f636-458e-9aca-f953ccf4a900/kube-rbac-proxy/0.log" Oct 14 16:12:01 crc kubenswrapper[4860]: I1014 16:12:01.653058 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5bc7d8f4c-hjwzs_88da4870-694b-46ba-9fda-5e85357bcb5e/kube-rbac-proxy/0.log" Oct 14 16:12:01 crc kubenswrapper[4860]: I1014 16:12:01.975649 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5bc7d8f4c-hjwzs_88da4870-694b-46ba-9fda-5e85357bcb5e/operator/0.log" Oct 14 16:12:01 crc kubenswrapper[4860]: I1014 16:12:01.983708 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nv67r_24580674-ab2e-46af-b79a-e2396d8f61a5/registry-server/0.log" Oct 14 16:12:02 crc kubenswrapper[4860]: I1014 16:12:02.151243 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-869cc7797f-4kql9_3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5/kube-rbac-proxy/0.log" Oct 14 16:12:02 crc kubenswrapper[4860]: I1014 16:12:02.327487 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-869cc7797f-4kql9_3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5/manager/0.log" Oct 14 16:12:02 crc kubenswrapper[4860]: I1014 16:12:02.339108 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-l6rbl_d0ac64a4-cdc5-4362-9359-712291fafbdf/kube-rbac-proxy/0.log" Oct 14 16:12:02 crc kubenswrapper[4860]: I1014 16:12:02.458677 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-l6rbl_d0ac64a4-cdc5-4362-9359-712291fafbdf/manager/0.log" Oct 14 16:12:02 crc kubenswrapper[4860]: I1014 16:12:02.705060 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-gthfm_0bfbfdd2-7b80-46dc-a353-0f5858f0ae4b/operator/0.log" Oct 14 16:12:02 crc kubenswrapper[4860]: I1014 16:12:02.776580 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-rpjh4_4450d3fe-e520-48c6-ac1d-25344bdedc5e/kube-rbac-proxy/0.log" Oct 14 16:12:02 crc kubenswrapper[4860]: I1014 16:12:02.849364 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-768555b76-hzfmn_c584f96e-f636-458e-9aca-f953ccf4a900/manager/0.log" Oct 14 16:12:02 
crc kubenswrapper[4860]: I1014 16:12:02.880613 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-rpjh4_4450d3fe-e520-48c6-ac1d-25344bdedc5e/manager/0.log" Oct 14 16:12:02 crc kubenswrapper[4860]: I1014 16:12:02.989837 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-578874c84d-xpq8w_572e90ee-e3d4-44a0-b3c5-d0005f4cb41c/kube-rbac-proxy/0.log" Oct 14 16:12:03 crc kubenswrapper[4860]: I1014 16:12:03.072471 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-578874c84d-xpq8w_572e90ee-e3d4-44a0-b3c5-d0005f4cb41c/manager/0.log" Oct 14 16:12:03 crc kubenswrapper[4860]: I1014 16:12:03.074609 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-ffcdd6c94-9m7mm_f9603eeb-cc1b-4dc8-82e6-9cf64109c774/kube-rbac-proxy/0.log" Oct 14 16:12:03 crc kubenswrapper[4860]: I1014 16:12:03.122431 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-ffcdd6c94-9m7mm_f9603eeb-cc1b-4dc8-82e6-9cf64109c774/manager/0.log" Oct 14 16:12:03 crc kubenswrapper[4860]: I1014 16:12:03.231360 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-646675d848-lzb7d_9d1ea96c-cdba-4586-ae97-c008ff1ed05e/kube-rbac-proxy/0.log" Oct 14 16:12:03 crc kubenswrapper[4860]: I1014 16:12:03.260649 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-646675d848-lzb7d_9d1ea96c-cdba-4586-ae97-c008ff1ed05e/manager/0.log" Oct 14 16:12:12 crc kubenswrapper[4860]: I1014 16:12:12.061579 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:12:12 crc kubenswrapper[4860]: E1014 16:12:12.062393 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:12:21 crc kubenswrapper[4860]: I1014 16:12:21.282676 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-ql4q7_f5b47471-c477-482c-8462-62edd00df3bc/control-plane-machine-set-operator/0.log" Oct 14 16:12:21 crc kubenswrapper[4860]: I1014 16:12:21.421565 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jnwqb_8e925912-cc05-4c2b-8de7-ba05cd298123/kube-rbac-proxy/0.log" Oct 14 16:12:21 crc kubenswrapper[4860]: I1014 16:12:21.539922 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jnwqb_8e925912-cc05-4c2b-8de7-ba05cd298123/machine-api-operator/0.log" Oct 14 16:12:25 crc kubenswrapper[4860]: I1014 16:12:25.064840 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:12:25 crc kubenswrapper[4860]: E1014 16:12:25.065741 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:12:35 crc kubenswrapper[4860]: I1014 16:12:35.466736 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-d96mc_ce7fd78e-7ed7-450e-bca7-ca9075b12a25/cert-manager-controller/0.log" Oct 14 16:12:35 crc kubenswrapper[4860]: I1014 16:12:35.537798 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-z626q_d1972274-e4e4-4910-b996-98f16f66de5e/cert-manager-cainjector/0.log" Oct 14 16:12:35 crc kubenswrapper[4860]: I1014 16:12:35.675450 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-xjn4j_bbaa104e-a070-4f3d-8807-959b551312b9/cert-manager-webhook/0.log" Oct 14 16:12:39 crc kubenswrapper[4860]: I1014 16:12:39.070520 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:12:39 crc kubenswrapper[4860]: E1014 16:12:39.072466 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:12:48 crc kubenswrapper[4860]: I1014 16:12:47.999665 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-stgb7_e5be091b-1de9-4a04-80b5-68ddf4fc73da/nmstate-console-plugin/0.log" Oct 14 16:12:48 crc kubenswrapper[4860]: I1014 16:12:48.118859 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wjlnp_3e969961-ebc3-4830-b52c-bbeb744ea07e/nmstate-handler/0.log" Oct 14 16:12:48 crc kubenswrapper[4860]: I1014 16:12:48.131402 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-t966d_9bef2f42-22a8-4cde-8267-c890543fe82e/kube-rbac-proxy/0.log" Oct 14 16:12:48 crc kubenswrapper[4860]: I1014 16:12:48.232670 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-t966d_9bef2f42-22a8-4cde-8267-c890543fe82e/nmstate-metrics/0.log" Oct 14 16:12:48 crc kubenswrapper[4860]: I1014 16:12:48.359256 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-zn5lz_b7e911b9-3fd1-49b4-8716-70507a0b2aa4/nmstate-operator/0.log" Oct 14 16:12:48 crc kubenswrapper[4860]: I1014 16:12:48.438685 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-c6bxt_c941e868-49fb-4e89-896a-50f0dbbfe71b/nmstate-webhook/0.log" Oct 14 16:12:53 crc kubenswrapper[4860]: I1014 16:12:53.062483 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:12:53 crc kubenswrapper[4860]: E1014 16:12:53.063253 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:13:04 crc kubenswrapper[4860]: I1014 16:13:04.029899 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-np5tb_76eff526-1e46-4804-a3ef-dfdc845038d7/kube-rbac-proxy/0.log" Oct 14 16:13:04 crc kubenswrapper[4860]: I1014 16:13:04.273077 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-frr-files/0.log" Oct 14 16:13:04 crc kubenswrapper[4860]: I1014 16:13:04.317977 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-np5tb_76eff526-1e46-4804-a3ef-dfdc845038d7/controller/0.log" Oct 14 16:13:04 crc kubenswrapper[4860]: I1014 16:13:04.513872 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-frr-files/0.log" Oct 14 16:13:04 crc kubenswrapper[4860]: I1014 16:13:04.573790 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-reloader/0.log" Oct 14 16:13:04 crc kubenswrapper[4860]: I1014 16:13:04.574946 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-reloader/0.log" Oct 14 16:13:04 crc kubenswrapper[4860]: I1014 16:13:04.584789 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-metrics/0.log" Oct 14 16:13:05 crc kubenswrapper[4860]: I1014 16:13:05.056690 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-frr-files/0.log" Oct 14 16:13:05 crc kubenswrapper[4860]: I1014 16:13:05.122798 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-reloader/0.log" Oct 14 16:13:05 crc kubenswrapper[4860]: I1014 16:13:05.124744 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-metrics/0.log" Oct 14 16:13:05 crc kubenswrapper[4860]: I1014 16:13:05.127794 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-metrics/0.log" Oct 14 16:13:05 crc kubenswrapper[4860]: I1014 16:13:05.376679 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-reloader/0.log" Oct 14 16:13:05 crc kubenswrapper[4860]: I1014 16:13:05.402708 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-frr-files/0.log" Oct 14 16:13:05 crc kubenswrapper[4860]: I1014 16:13:05.431594 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-metrics/0.log" Oct 14 16:13:05 crc kubenswrapper[4860]: I1014 16:13:05.484455 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/controller/0.log" Oct 14 16:13:05 crc kubenswrapper[4860]: I1014 
16:13:05.680913 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/kube-rbac-proxy/0.log" Oct 14 16:13:05 crc kubenswrapper[4860]: I1014 16:13:05.685802 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/frr-metrics/0.log" Oct 14 16:13:05 crc kubenswrapper[4860]: I1014 16:13:05.738322 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/kube-rbac-proxy-frr/0.log" Oct 14 16:13:05 crc kubenswrapper[4860]: I1014 16:13:05.913686 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/reloader/0.log" Oct 14 16:13:06 crc kubenswrapper[4860]: I1014 16:13:06.087106 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-zvcbb_db44a95a-8142-4353-affc-7227a205135c/frr-k8s-webhook-server/0.log" Oct 14 16:13:06 crc kubenswrapper[4860]: I1014 16:13:06.378826 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8575fd6987-wq9q7_03cac4a6-319b-4df3-baf8-82868fa438e5/manager/0.log" Oct 14 16:13:06 crc kubenswrapper[4860]: I1014 16:13:06.543692 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-75cf49597f-kjngh_19ebf47b-7556-421f-bc1a-442040a5995c/webhook-server/0.log" Oct 14 16:13:06 crc kubenswrapper[4860]: I1014 16:13:06.813682 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tbgl7_9ded0ce9-6baf-429a-b3ad-493b2bfda7de/kube-rbac-proxy/0.log" Oct 14 16:13:07 crc kubenswrapper[4860]: I1014 16:13:07.029404 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/frr/0.log" Oct 14 16:13:07 crc kubenswrapper[4860]: I1014 16:13:07.061324 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:13:07 crc kubenswrapper[4860]: E1014 16:13:07.061569 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:13:07 crc kubenswrapper[4860]: I1014 16:13:07.208212 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tbgl7_9ded0ce9-6baf-429a-b3ad-493b2bfda7de/speaker/0.log" Oct 14 16:13:20 crc kubenswrapper[4860]: I1014 16:13:20.062179 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:13:20 crc kubenswrapper[4860]: E1014 16:13:20.063108 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:13:20 crc 
kubenswrapper[4860]: I1014 16:13:20.781143 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6_8f866932-2796-4d36-82ea-ffac60aee340/util/0.log" Oct 14 16:13:21 crc kubenswrapper[4860]: I1014 16:13:21.007022 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6_8f866932-2796-4d36-82ea-ffac60aee340/pull/0.log" Oct 14 16:13:21 crc kubenswrapper[4860]: I1014 16:13:21.007305 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6_8f866932-2796-4d36-82ea-ffac60aee340/pull/0.log" Oct 14 16:13:21 crc kubenswrapper[4860]: I1014 16:13:21.011449 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6_8f866932-2796-4d36-82ea-ffac60aee340/util/0.log" Oct 14 16:13:21 crc kubenswrapper[4860]: I1014 16:13:21.166907 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6_8f866932-2796-4d36-82ea-ffac60aee340/util/0.log" Oct 14 16:13:21 crc kubenswrapper[4860]: I1014 16:13:21.245278 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6_8f866932-2796-4d36-82ea-ffac60aee340/extract/0.log" Oct 14 16:13:21 crc kubenswrapper[4860]: I1014 16:13:21.405577 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6_8f866932-2796-4d36-82ea-ffac60aee340/pull/0.log" Oct 14 16:13:21 crc kubenswrapper[4860]: I1014 16:13:21.422140 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-48p8c_7547b7d4-7dbb-4f07-a064-8862a12c572c/extract-utilities/0.log" Oct 14 16:13:21 crc kubenswrapper[4860]: I1014 16:13:21.599930 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-48p8c_7547b7d4-7dbb-4f07-a064-8862a12c572c/extract-content/0.log" Oct 14 16:13:21 crc kubenswrapper[4860]: I1014 16:13:21.646013 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-48p8c_7547b7d4-7dbb-4f07-a064-8862a12c572c/extract-content/0.log" Oct 14 16:13:21 crc kubenswrapper[4860]: I1014 16:13:21.652276 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-48p8c_7547b7d4-7dbb-4f07-a064-8862a12c572c/extract-utilities/0.log" Oct 14 16:13:21 crc kubenswrapper[4860]: I1014 16:13:21.853296 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-48p8c_7547b7d4-7dbb-4f07-a064-8862a12c572c/extract-utilities/0.log" Oct 14 16:13:21 crc kubenswrapper[4860]: I1014 16:13:21.869512 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-48p8c_7547b7d4-7dbb-4f07-a064-8862a12c572c/extract-content/0.log" Oct 14 16:13:22 crc kubenswrapper[4860]: I1014 16:13:22.103801 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m5zb9_06644532-4731-4669-9d9f-c26cfa66a0de/extract-utilities/0.log" Oct 14 16:13:22 crc kubenswrapper[4860]: I1014 16:13:22.204217 4860 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-48p8c_7547b7d4-7dbb-4f07-a064-8862a12c572c/registry-server/0.log" Oct 14 16:13:22 crc kubenswrapper[4860]: I1014 16:13:22.312014 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m5zb9_06644532-4731-4669-9d9f-c26cfa66a0de/extract-utilities/0.log" Oct 14 16:13:22 crc kubenswrapper[4860]: I1014 16:13:22.592380 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m5zb9_06644532-4731-4669-9d9f-c26cfa66a0de/extract-content/0.log" Oct 14 16:13:22 crc kubenswrapper[4860]: I1014 16:13:22.605564 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m5zb9_06644532-4731-4669-9d9f-c26cfa66a0de/extract-content/0.log" Oct 14 16:13:22 crc kubenswrapper[4860]: I1014 16:13:22.806428 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m5zb9_06644532-4731-4669-9d9f-c26cfa66a0de/extract-content/0.log" Oct 14 16:13:22 crc kubenswrapper[4860]: I1014 16:13:22.851592 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m5zb9_06644532-4731-4669-9d9f-c26cfa66a0de/extract-utilities/0.log" Oct 14 16:13:23 crc kubenswrapper[4860]: I1014 16:13:23.161481 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg_4c172442-19ed-484a-8404-5a5373f066e1/util/0.log" Oct 14 16:13:23 crc kubenswrapper[4860]: I1014 16:13:23.210712 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg_4c172442-19ed-484a-8404-5a5373f066e1/util/0.log" Oct 14 16:13:23 crc kubenswrapper[4860]: I1014 16:13:23.290483 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg_4c172442-19ed-484a-8404-5a5373f066e1/pull/0.log" Oct 14 16:13:23 crc kubenswrapper[4860]: I1014 16:13:23.442394 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg_4c172442-19ed-484a-8404-5a5373f066e1/pull/0.log" Oct 14 16:13:23 crc kubenswrapper[4860]: I1014 16:13:23.628253 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg_4c172442-19ed-484a-8404-5a5373f066e1/pull/0.log" Oct 14 16:13:23 crc kubenswrapper[4860]: I1014 16:13:23.657997 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg_4c172442-19ed-484a-8404-5a5373f066e1/util/0.log" Oct 14 16:13:24 crc kubenswrapper[4860]: I1014 16:13:24.007858 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg_4c172442-19ed-484a-8404-5a5373f066e1/extract/0.log" Oct 14 16:13:24 crc kubenswrapper[4860]: I1014 16:13:24.535106 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkgfs_8b57e8b6-5f6f-42fb-a3c2-53567553c663/extract-utilities/0.log" Oct 14 16:13:24 crc kubenswrapper[4860]: I1014 16:13:24.660724 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkgfs_8b57e8b6-5f6f-42fb-a3c2-53567553c663/extract-utilities/0.log" Oct 14 16:13:24 crc kubenswrapper[4860]: I1014 16:13:24.679766 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkgfs_8b57e8b6-5f6f-42fb-a3c2-53567553c663/extract-content/0.log" Oct 14 16:13:24 crc kubenswrapper[4860]: I1014 16:13:24.725965 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkgfs_8b57e8b6-5f6f-42fb-a3c2-53567553c663/extract-content/0.log" Oct 14 16:13:24 crc kubenswrapper[4860]: I1014 16:13:24.780713 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4tjxk_4e88f73d-d331-4edf-903f-2930d09f8fd9/marketplace-operator/0.log" Oct 14 16:13:24 crc kubenswrapper[4860]: I1014 16:13:24.932615 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkgfs_8b57e8b6-5f6f-42fb-a3c2-53567553c663/extract-utilities/0.log" Oct 14 16:13:24 crc kubenswrapper[4860]: I1014 16:13:24.961478 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkgfs_8b57e8b6-5f6f-42fb-a3c2-53567553c663/extract-content/0.log" Oct 14 16:13:25 crc kubenswrapper[4860]: I1014 16:13:25.259604 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkgfs_8b57e8b6-5f6f-42fb-a3c2-53567553c663/registry-server/0.log" Oct 14 16:13:25 crc kubenswrapper[4860]: I1014 16:13:25.282655 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsrz4_f5699bb2-6633-43ac-9d64-3b83f3471e4d/extract-utilities/0.log" Oct 14 16:13:25 crc kubenswrapper[4860]: I1014 16:13:25.487521 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsrz4_f5699bb2-6633-43ac-9d64-3b83f3471e4d/extract-utilities/0.log" Oct 14 16:13:25 crc kubenswrapper[4860]: I1014 16:13:25.494811 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m5zb9_06644532-4731-4669-9d9f-c26cfa66a0de/registry-server/0.log" Oct 14 16:13:25 crc kubenswrapper[4860]: I1014 16:13:25.563366 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsrz4_f5699bb2-6633-43ac-9d64-3b83f3471e4d/extract-content/0.log" Oct 14 16:13:25 crc kubenswrapper[4860]: I1014 16:13:25.600785 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsrz4_f5699bb2-6633-43ac-9d64-3b83f3471e4d/extract-content/0.log" Oct 14 16:13:25 crc kubenswrapper[4860]: I1014 16:13:25.800445 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsrz4_f5699bb2-6633-43ac-9d64-3b83f3471e4d/extract-utilities/0.log" Oct 14 16:13:25 crc kubenswrapper[4860]: I1014 16:13:25.816509 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsrz4_f5699bb2-6633-43ac-9d64-3b83f3471e4d/extract-content/0.log" Oct 14 16:13:26 crc kubenswrapper[4860]: I1014 16:13:26.424261 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsrz4_f5699bb2-6633-43ac-9d64-3b83f3471e4d/registry-server/0.log" Oct 14 16:13:35 crc kubenswrapper[4860]: I1014 16:13:35.063111 4860 scope.go:117] "RemoveContainer" 
containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:13:35 crc kubenswrapper[4860]: E1014 16:13:35.064284 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:13:48 crc kubenswrapper[4860]: I1014 16:13:48.062459 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:13:48 crc kubenswrapper[4860]: E1014 16:13:48.063285 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:14:02 crc kubenswrapper[4860]: I1014 16:14:02.080402 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:14:02 crc kubenswrapper[4860]: E1014 16:14:02.084125 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:14:16 crc kubenswrapper[4860]: I1014 16:14:16.061762 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:14:16 crc kubenswrapper[4860]: E1014 16:14:16.062618 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:14:29 crc kubenswrapper[4860]: I1014 16:14:29.069980 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:14:29 crc kubenswrapper[4860]: E1014 16:14:29.070859 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:14:44 crc kubenswrapper[4860]: I1014 16:14:44.061643 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:14:44 crc kubenswrapper[4860]: E1014 16:14:44.062397 4860 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:14:58 crc kubenswrapper[4860]: I1014 16:14:58.062346 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:14:58 crc kubenswrapper[4860]: E1014 16:14:58.063291 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.163102 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px"] Oct 14 16:15:00 crc kubenswrapper[4860]: E1014 16:15:00.163862 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="979e1edc-d9fd-4508-abb8-19aea56540ca" containerName="container-00" Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.163877 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="979e1edc-d9fd-4508-abb8-19aea56540ca" containerName="container-00" Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.164096 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="979e1edc-d9fd-4508-abb8-19aea56540ca" containerName="container-00" Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.166809 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.170463 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.170555 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.179256 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px"] Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.245514 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db5aaab1-0a57-47ac-a681-d5f352f028bd-config-volume\") pod \"collect-profiles-29340975-xr6px\" (UID: \"db5aaab1-0a57-47ac-a681-d5f352f028bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.245905 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db5aaab1-0a57-47ac-a681-d5f352f028bd-secret-volume\") pod \"collect-profiles-29340975-xr6px\" (UID: \"db5aaab1-0a57-47ac-a681-d5f352f028bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.245987 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84mjn\" (UniqueName: \"kubernetes.io/projected/db5aaab1-0a57-47ac-a681-d5f352f028bd-kube-api-access-84mjn\") pod \"collect-profiles-29340975-xr6px\" (UID: \"db5aaab1-0a57-47ac-a681-d5f352f028bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.348631 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db5aaab1-0a57-47ac-a681-d5f352f028bd-secret-volume\") pod \"collect-profiles-29340975-xr6px\" (UID: \"db5aaab1-0a57-47ac-a681-d5f352f028bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.349416 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84mjn\" (UniqueName: \"kubernetes.io/projected/db5aaab1-0a57-47ac-a681-d5f352f028bd-kube-api-access-84mjn\") pod \"collect-profiles-29340975-xr6px\" (UID: \"db5aaab1-0a57-47ac-a681-d5f352f028bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.349568 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db5aaab1-0a57-47ac-a681-d5f352f028bd-config-volume\") pod \"collect-profiles-29340975-xr6px\" (UID: \"db5aaab1-0a57-47ac-a681-d5f352f028bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.350823 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db5aaab1-0a57-47ac-a681-d5f352f028bd-config-volume\") pod 
\"collect-profiles-29340975-xr6px\" (UID: \"db5aaab1-0a57-47ac-a681-d5f352f028bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.355491 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db5aaab1-0a57-47ac-a681-d5f352f028bd-secret-volume\") pod \"collect-profiles-29340975-xr6px\" (UID: \"db5aaab1-0a57-47ac-a681-d5f352f028bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.371321 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84mjn\" (UniqueName: \"kubernetes.io/projected/db5aaab1-0a57-47ac-a681-d5f352f028bd-kube-api-access-84mjn\") pod \"collect-profiles-29340975-xr6px\" (UID: \"db5aaab1-0a57-47ac-a681-d5f352f028bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" Oct 14 16:15:00 crc kubenswrapper[4860]: I1014 16:15:00.492304 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" Oct 14 16:15:01 crc kubenswrapper[4860]: I1014 16:15:01.081429 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px"] Oct 14 16:15:01 crc kubenswrapper[4860]: I1014 16:15:01.416968 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" event={"ID":"db5aaab1-0a57-47ac-a681-d5f352f028bd","Type":"ContainerStarted","Data":"d1f0e307c42fc61eff2a9a632e4fe502a45c4cb4339b3875ebeebe6f0182b5fd"} Oct 14 16:15:01 crc kubenswrapper[4860]: I1014 16:15:01.418341 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" event={"ID":"db5aaab1-0a57-47ac-a681-d5f352f028bd","Type":"ContainerStarted","Data":"288cfae35390f16c6e5565f99325d0e5f679e0729e03fb1d82e0c5851cd5435f"} Oct 14 16:15:02 crc kubenswrapper[4860]: I1014 16:15:02.430936 4860 generic.go:334] "Generic (PLEG): container finished" podID="db5aaab1-0a57-47ac-a681-d5f352f028bd" containerID="d1f0e307c42fc61eff2a9a632e4fe502a45c4cb4339b3875ebeebe6f0182b5fd" exitCode=0 Oct 14 16:15:02 crc kubenswrapper[4860]: I1014 16:15:02.431379 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" event={"ID":"db5aaab1-0a57-47ac-a681-d5f352f028bd","Type":"ContainerDied","Data":"d1f0e307c42fc61eff2a9a632e4fe502a45c4cb4339b3875ebeebe6f0182b5fd"} Oct 14 16:15:03 crc kubenswrapper[4860]: I1014 16:15:03.797557 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" Oct 14 16:15:03 crc kubenswrapper[4860]: I1014 16:15:03.929142 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84mjn\" (UniqueName: \"kubernetes.io/projected/db5aaab1-0a57-47ac-a681-d5f352f028bd-kube-api-access-84mjn\") pod \"db5aaab1-0a57-47ac-a681-d5f352f028bd\" (UID: \"db5aaab1-0a57-47ac-a681-d5f352f028bd\") " Oct 14 16:15:03 crc kubenswrapper[4860]: I1014 16:15:03.929276 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db5aaab1-0a57-47ac-a681-d5f352f028bd-config-volume\") pod \"db5aaab1-0a57-47ac-a681-d5f352f028bd\" (UID: \"db5aaab1-0a57-47ac-a681-d5f352f028bd\") " Oct 14 16:15:03 crc kubenswrapper[4860]: I1014 16:15:03.929351 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db5aaab1-0a57-47ac-a681-d5f352f028bd-secret-volume\") pod \"db5aaab1-0a57-47ac-a681-d5f352f028bd\" (UID: \"db5aaab1-0a57-47ac-a681-d5f352f028bd\") " Oct 14 16:15:03 crc kubenswrapper[4860]: I1014 16:15:03.931820 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db5aaab1-0a57-47ac-a681-d5f352f028bd-config-volume" (OuterVolumeSpecName: "config-volume") pod "db5aaab1-0a57-47ac-a681-d5f352f028bd" (UID: "db5aaab1-0a57-47ac-a681-d5f352f028bd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 16:15:03 crc kubenswrapper[4860]: I1014 16:15:03.936750 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db5aaab1-0a57-47ac-a681-d5f352f028bd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "db5aaab1-0a57-47ac-a681-d5f352f028bd" (UID: "db5aaab1-0a57-47ac-a681-d5f352f028bd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 16:15:03 crc kubenswrapper[4860]: I1014 16:15:03.937763 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db5aaab1-0a57-47ac-a681-d5f352f028bd-kube-api-access-84mjn" (OuterVolumeSpecName: "kube-api-access-84mjn") pod "db5aaab1-0a57-47ac-a681-d5f352f028bd" (UID: "db5aaab1-0a57-47ac-a681-d5f352f028bd"). InnerVolumeSpecName "kube-api-access-84mjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:15:04 crc kubenswrapper[4860]: I1014 16:15:04.031295 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84mjn\" (UniqueName: \"kubernetes.io/projected/db5aaab1-0a57-47ac-a681-d5f352f028bd-kube-api-access-84mjn\") on node \"crc\" DevicePath \"\"" Oct 14 16:15:04 crc kubenswrapper[4860]: I1014 16:15:04.031543 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db5aaab1-0a57-47ac-a681-d5f352f028bd-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 16:15:04 crc kubenswrapper[4860]: I1014 16:15:04.031630 4860 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db5aaab1-0a57-47ac-a681-d5f352f028bd-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 16:15:04 crc kubenswrapper[4860]: I1014 16:15:04.449062 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" event={"ID":"db5aaab1-0a57-47ac-a681-d5f352f028bd","Type":"ContainerDied","Data":"288cfae35390f16c6e5565f99325d0e5f679e0729e03fb1d82e0c5851cd5435f"} Oct 14 16:15:04 crc kubenswrapper[4860]: I1014 16:15:04.449103 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="288cfae35390f16c6e5565f99325d0e5f679e0729e03fb1d82e0c5851cd5435f" Oct 14 16:15:04 crc kubenswrapper[4860]: I1014 16:15:04.449152 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340975-xr6px" Oct 14 16:15:04 crc kubenswrapper[4860]: I1014 16:15:04.526376 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t"] Oct 14 16:15:04 crc kubenswrapper[4860]: I1014 16:15:04.536222 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340930-7qf9t"] Oct 14 16:15:05 crc kubenswrapper[4860]: I1014 16:15:05.072740 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d12e0b61-a53a-4f81-b2e2-8ae3efb42288" path="/var/lib/kubelet/pods/d12e0b61-a53a-4f81-b2e2-8ae3efb42288/volumes" Oct 14 16:15:12 crc kubenswrapper[4860]: I1014 16:15:12.061982 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:15:12 crc kubenswrapper[4860]: E1014 16:15:12.062858 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:15:18 crc kubenswrapper[4860]: I1014 16:15:18.898824 4860 scope.go:117] "RemoveContainer" containerID="d76e098bda4daace3fbab28c3cb9265ac2f276c5711911dbc1d4f6de7cf20a9c" Oct 14 16:15:26 crc kubenswrapper[4860]: I1014 16:15:26.062519 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:15:26 crc kubenswrapper[4860]: E1014 16:15:26.063538 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Oct 14 16:15:26 crc kubenswrapper[4860]: E1014 16:15:26.063538 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051"
Oct 14 16:15:41 crc kubenswrapper[4860]: I1014 16:15:41.066269 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce"
Oct 14 16:15:41 crc kubenswrapper[4860]: E1014 16:15:41.067072 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051"
Oct 14 16:15:49 crc kubenswrapper[4860]: I1014 16:15:49.896919 4860 generic.go:334] "Generic (PLEG): container finished" podID="b99aedca-914d-47ab-8261-a05253cc09df" containerID="a82e33e4ae21ff76abb050c6587b500cc1c45fe5216ccb6284ded2b9a83a432c" exitCode=0
Oct 14 16:15:49 crc kubenswrapper[4860]: I1014 16:15:49.897014 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mnblv/must-gather-m8c5t" event={"ID":"b99aedca-914d-47ab-8261-a05253cc09df","Type":"ContainerDied","Data":"a82e33e4ae21ff76abb050c6587b500cc1c45fe5216ccb6284ded2b9a83a432c"}
Oct 14 16:15:49 crc kubenswrapper[4860]: I1014 16:15:49.898310 4860 scope.go:117] "RemoveContainer" containerID="a82e33e4ae21ff76abb050c6587b500cc1c45fe5216ccb6284ded2b9a83a432c"
Oct 14 16:15:50 crc kubenswrapper[4860]: I1014 16:15:50.922323 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mnblv_must-gather-m8c5t_b99aedca-914d-47ab-8261-a05253cc09df/gather/0.log"
Oct 14 16:15:53 crc kubenswrapper[4860]: E1014 16:15:53.731954 4860 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.179:52624->38.102.83.179:42131: write tcp 38.102.83.179:52624->38.102.83.179:42131: write: broken pipe
Oct 14 16:15:56 crc kubenswrapper[4860]: I1014 16:15:56.062505 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce"
Oct 14 16:15:56 crc kubenswrapper[4860]: E1014 16:15:56.063835 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051"
Oct 14 16:15:59 crc kubenswrapper[4860]: I1014 16:15:59.260721 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mnblv/must-gather-m8c5t"]
Oct 14 16:15:59 crc kubenswrapper[4860]: I1014 16:15:59.261522 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-mnblv/must-gather-m8c5t" podUID="b99aedca-914d-47ab-8261-a05253cc09df" containerName="copy" containerID="cri-o://589b2e06f960b9383c020194d8c175e2608ec9f31f52a4893b0ed1ed8938c40f" gracePeriod=2
Oct 14 16:15:59 crc kubenswrapper[4860]: I1014 16:15:59.269624 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mnblv/must-gather-m8c5t"]
Oct 14 16:15:59 crc kubenswrapper[4860]: I1014 16:15:59.732933 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mnblv_must-gather-m8c5t_b99aedca-914d-47ab-8261-a05253cc09df/copy/0.log"
Oct 14 16:15:59 crc kubenswrapper[4860]: I1014 16:15:59.733755 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mnblv/must-gather-m8c5t"
Oct 14 16:15:59 crc kubenswrapper[4860]: I1014 16:15:59.784873 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7tb6\" (UniqueName: \"kubernetes.io/projected/b99aedca-914d-47ab-8261-a05253cc09df-kube-api-access-w7tb6\") pod \"b99aedca-914d-47ab-8261-a05253cc09df\" (UID: \"b99aedca-914d-47ab-8261-a05253cc09df\") "
Oct 14 16:15:59 crc kubenswrapper[4860]: I1014 16:15:59.784951 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b99aedca-914d-47ab-8261-a05253cc09df-must-gather-output\") pod \"b99aedca-914d-47ab-8261-a05253cc09df\" (UID: \"b99aedca-914d-47ab-8261-a05253cc09df\") "
Oct 14 16:15:59 crc kubenswrapper[4860]: I1014 16:15:59.798384 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b99aedca-914d-47ab-8261-a05253cc09df-kube-api-access-w7tb6" (OuterVolumeSpecName: "kube-api-access-w7tb6") pod "b99aedca-914d-47ab-8261-a05253cc09df" (UID: "b99aedca-914d-47ab-8261-a05253cc09df"). InnerVolumeSpecName "kube-api-access-w7tb6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 16:15:59 crc kubenswrapper[4860]: I1014 16:15:59.887477 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7tb6\" (UniqueName: \"kubernetes.io/projected/b99aedca-914d-47ab-8261-a05253cc09df-kube-api-access-w7tb6\") on node \"crc\" DevicePath \"\""
Oct 14 16:15:59 crc kubenswrapper[4860]: I1014 16:15:59.975107 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b99aedca-914d-47ab-8261-a05253cc09df-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b99aedca-914d-47ab-8261-a05253cc09df" (UID: "b99aedca-914d-47ab-8261-a05253cc09df"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 16:15:59 crc kubenswrapper[4860]: I1014 16:15:59.989310 4860 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b99aedca-914d-47ab-8261-a05253cc09df-must-gather-output\") on node \"crc\" DevicePath \"\""
Oct 14 16:16:00 crc kubenswrapper[4860]: I1014 16:16:00.012423 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mnblv_must-gather-m8c5t_b99aedca-914d-47ab-8261-a05253cc09df/copy/0.log"
Oct 14 16:16:00 crc kubenswrapper[4860]: I1014 16:16:00.014070 4860 generic.go:334] "Generic (PLEG): container finished" podID="b99aedca-914d-47ab-8261-a05253cc09df" containerID="589b2e06f960b9383c020194d8c175e2608ec9f31f52a4893b0ed1ed8938c40f" exitCode=143
Oct 14 16:16:00 crc kubenswrapper[4860]: I1014 16:16:00.014124 4860 scope.go:117] "RemoveContainer" containerID="589b2e06f960b9383c020194d8c175e2608ec9f31f52a4893b0ed1ed8938c40f"
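The exitCode=143 just above is the signature of a graceful stop: "Killing container with a grace period" sent SIGTERM (gracePeriod=2), and 143 = 128 + 15, the conventional encoding for death by SIGTERM. Decoding it:

```go
package main

import (
	"fmt"
	"syscall"
)

// Container exit codes above 128 follow the 128+signal convention:
// 143 = 128 + 15 (SIGTERM), the expected result of a graceful stop.
func main() {
	exitCode := 143
	if exitCode > 128 {
		sig := syscall.Signal(exitCode - 128)
		fmt.Printf("exit %d => signal %d (%v)\n", exitCode, sig, sig)
		// prints: exit 143 => signal 15 (terminated), i.e. SIGTERM
	}
}
```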
Need to start a new one" pod="openshift-must-gather-mnblv/must-gather-m8c5t" Oct 14 16:16:00 crc kubenswrapper[4860]: I1014 16:16:00.057677 4860 scope.go:117] "RemoveContainer" containerID="a82e33e4ae21ff76abb050c6587b500cc1c45fe5216ccb6284ded2b9a83a432c" Oct 14 16:16:00 crc kubenswrapper[4860]: I1014 16:16:00.108100 4860 scope.go:117] "RemoveContainer" containerID="589b2e06f960b9383c020194d8c175e2608ec9f31f52a4893b0ed1ed8938c40f" Oct 14 16:16:00 crc kubenswrapper[4860]: E1014 16:16:00.108681 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"589b2e06f960b9383c020194d8c175e2608ec9f31f52a4893b0ed1ed8938c40f\": container with ID starting with 589b2e06f960b9383c020194d8c175e2608ec9f31f52a4893b0ed1ed8938c40f not found: ID does not exist" containerID="589b2e06f960b9383c020194d8c175e2608ec9f31f52a4893b0ed1ed8938c40f" Oct 14 16:16:00 crc kubenswrapper[4860]: I1014 16:16:00.108745 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"589b2e06f960b9383c020194d8c175e2608ec9f31f52a4893b0ed1ed8938c40f"} err="failed to get container status \"589b2e06f960b9383c020194d8c175e2608ec9f31f52a4893b0ed1ed8938c40f\": rpc error: code = NotFound desc = could not find container \"589b2e06f960b9383c020194d8c175e2608ec9f31f52a4893b0ed1ed8938c40f\": container with ID starting with 589b2e06f960b9383c020194d8c175e2608ec9f31f52a4893b0ed1ed8938c40f not found: ID does not exist" Oct 14 16:16:00 crc kubenswrapper[4860]: I1014 16:16:00.108788 4860 scope.go:117] "RemoveContainer" containerID="a82e33e4ae21ff76abb050c6587b500cc1c45fe5216ccb6284ded2b9a83a432c" Oct 14 16:16:00 crc kubenswrapper[4860]: E1014 16:16:00.109864 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a82e33e4ae21ff76abb050c6587b500cc1c45fe5216ccb6284ded2b9a83a432c\": container with ID starting with a82e33e4ae21ff76abb050c6587b500cc1c45fe5216ccb6284ded2b9a83a432c not found: ID does not exist" containerID="a82e33e4ae21ff76abb050c6587b500cc1c45fe5216ccb6284ded2b9a83a432c" Oct 14 16:16:00 crc kubenswrapper[4860]: I1014 16:16:00.109926 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a82e33e4ae21ff76abb050c6587b500cc1c45fe5216ccb6284ded2b9a83a432c"} err="failed to get container status \"a82e33e4ae21ff76abb050c6587b500cc1c45fe5216ccb6284ded2b9a83a432c\": rpc error: code = NotFound desc = could not find container \"a82e33e4ae21ff76abb050c6587b500cc1c45fe5216ccb6284ded2b9a83a432c\": container with ID starting with a82e33e4ae21ff76abb050c6587b500cc1c45fe5216ccb6284ded2b9a83a432c not found: ID does not exist" Oct 14 16:16:01 crc kubenswrapper[4860]: I1014 16:16:01.071910 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b99aedca-914d-47ab-8261-a05253cc09df" path="/var/lib/kubelet/pods/b99aedca-914d-47ab-8261-a05253cc09df/volumes" Oct 14 16:16:10 crc kubenswrapper[4860]: I1014 16:16:10.062877 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:16:10 crc kubenswrapper[4860]: E1014 16:16:10.063696 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:16:25 crc kubenswrapper[4860]: I1014 16:16:25.062007 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:16:25 crc kubenswrapper[4860]: E1014 16:16:25.062885 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:16:40 crc kubenswrapper[4860]: I1014 16:16:40.061958 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:16:40 crc kubenswrapper[4860]: I1014 16:16:40.409817 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"20e59eaa6be34b86827808ea8770025074b38ad785ce4c0eccd4f5e13bb7b741"} Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.493432 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-r4bls/must-gather-l94kb"] Oct 14 16:16:41 crc kubenswrapper[4860]: E1014 16:16:41.494133 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99aedca-914d-47ab-8261-a05253cc09df" containerName="copy" Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.494145 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b99aedca-914d-47ab-8261-a05253cc09df" containerName="copy" Oct 14 16:16:41 crc kubenswrapper[4860]: E1014 16:16:41.494189 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99aedca-914d-47ab-8261-a05253cc09df" containerName="gather" Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.494195 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b99aedca-914d-47ab-8261-a05253cc09df" containerName="gather" Oct 14 16:16:41 crc kubenswrapper[4860]: E1014 16:16:41.494205 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5aaab1-0a57-47ac-a681-d5f352f028bd" containerName="collect-profiles" Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.494212 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5aaab1-0a57-47ac-a681-d5f352f028bd" containerName="collect-profiles" Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.494378 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="b99aedca-914d-47ab-8261-a05253cc09df" containerName="copy" Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.494395 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="db5aaab1-0a57-47ac-a681-d5f352f028bd" containerName="collect-profiles" Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.494408 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="b99aedca-914d-47ab-8261-a05253cc09df" containerName="gather" Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.495343 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-r4bls/must-gather-l94kb" Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.498564 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-r4bls"/"openshift-service-ca.crt" Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.498775 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-r4bls"/"kube-root-ca.crt" Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.525368 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-r4bls/must-gather-l94kb"] Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.577822 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1b547cb9-b2c8-444a-b8d3-77e668f953c3-must-gather-output\") pod \"must-gather-l94kb\" (UID: \"1b547cb9-b2c8-444a-b8d3-77e668f953c3\") " pod="openshift-must-gather-r4bls/must-gather-l94kb" Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.577955 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk69f\" (UniqueName: \"kubernetes.io/projected/1b547cb9-b2c8-444a-b8d3-77e668f953c3-kube-api-access-tk69f\") pod \"must-gather-l94kb\" (UID: \"1b547cb9-b2c8-444a-b8d3-77e668f953c3\") " pod="openshift-must-gather-r4bls/must-gather-l94kb" Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.679214 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk69f\" (UniqueName: \"kubernetes.io/projected/1b547cb9-b2c8-444a-b8d3-77e668f953c3-kube-api-access-tk69f\") pod \"must-gather-l94kb\" (UID: \"1b547cb9-b2c8-444a-b8d3-77e668f953c3\") " pod="openshift-must-gather-r4bls/must-gather-l94kb" Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.679324 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1b547cb9-b2c8-444a-b8d3-77e668f953c3-must-gather-output\") pod \"must-gather-l94kb\" (UID: \"1b547cb9-b2c8-444a-b8d3-77e668f953c3\") " pod="openshift-must-gather-r4bls/must-gather-l94kb" Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.679814 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1b547cb9-b2c8-444a-b8d3-77e668f953c3-must-gather-output\") pod \"must-gather-l94kb\" (UID: \"1b547cb9-b2c8-444a-b8d3-77e668f953c3\") " pod="openshift-must-gather-r4bls/must-gather-l94kb" Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.723693 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk69f\" (UniqueName: \"kubernetes.io/projected/1b547cb9-b2c8-444a-b8d3-77e668f953c3-kube-api-access-tk69f\") pod \"must-gather-l94kb\" (UID: \"1b547cb9-b2c8-444a-b8d3-77e668f953c3\") " pod="openshift-must-gather-r4bls/must-gather-l94kb" Oct 14 16:16:41 crc kubenswrapper[4860]: I1014 16:16:41.815246 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-r4bls/must-gather-l94kb" Oct 14 16:16:42 crc kubenswrapper[4860]: I1014 16:16:42.293695 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-r4bls/must-gather-l94kb"] Oct 14 16:16:42 crc kubenswrapper[4860]: W1014 16:16:42.301898 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b547cb9_b2c8_444a_b8d3_77e668f953c3.slice/crio-467b009c8cf6c5bbe9153c69fee0af1045a48549067c6143da4a38c045bb2545 WatchSource:0}: Error finding container 467b009c8cf6c5bbe9153c69fee0af1045a48549067c6143da4a38c045bb2545: Status 404 returned error can't find the container with id 467b009c8cf6c5bbe9153c69fee0af1045a48549067c6143da4a38c045bb2545 Oct 14 16:16:42 crc kubenswrapper[4860]: I1014 16:16:42.427524 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r4bls/must-gather-l94kb" event={"ID":"1b547cb9-b2c8-444a-b8d3-77e668f953c3","Type":"ContainerStarted","Data":"467b009c8cf6c5bbe9153c69fee0af1045a48549067c6143da4a38c045bb2545"} Oct 14 16:16:43 crc kubenswrapper[4860]: I1014 16:16:43.448945 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r4bls/must-gather-l94kb" event={"ID":"1b547cb9-b2c8-444a-b8d3-77e668f953c3","Type":"ContainerStarted","Data":"660b073c3b82eeaea8febd92751199953791af2aaea7dfb6ab04b161f96f865d"} Oct 14 16:16:43 crc kubenswrapper[4860]: I1014 16:16:43.449527 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r4bls/must-gather-l94kb" event={"ID":"1b547cb9-b2c8-444a-b8d3-77e668f953c3","Type":"ContainerStarted","Data":"da93f14a58969d91833ef089cca1896c204c5c1c24f47d39e877ab47755146a3"} Oct 14 16:16:43 crc kubenswrapper[4860]: I1014 16:16:43.467093 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-r4bls/must-gather-l94kb" podStartSLOduration=2.467070265 podStartE2EDuration="2.467070265s" podCreationTimestamp="2025-10-14 16:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 16:16:43.464904702 +0000 UTC m=+5265.051688161" watchObservedRunningTime="2025-10-14 16:16:43.467070265 +0000 UTC m=+5265.053853734" Oct 14 16:16:46 crc kubenswrapper[4860]: I1014 16:16:46.595584 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-r4bls/crc-debug-z2lfp"] Oct 14 16:16:46 crc kubenswrapper[4860]: I1014 16:16:46.597513 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-r4bls/crc-debug-z2lfp" Oct 14 16:16:46 crc kubenswrapper[4860]: I1014 16:16:46.599578 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-r4bls"/"default-dockercfg-rgmxx" Oct 14 16:16:46 crc kubenswrapper[4860]: I1014 16:16:46.787271 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f10fa41c-e3bb-4f73-8b42-9ec815d30ed7-host\") pod \"crc-debug-z2lfp\" (UID: \"f10fa41c-e3bb-4f73-8b42-9ec815d30ed7\") " pod="openshift-must-gather-r4bls/crc-debug-z2lfp" Oct 14 16:16:46 crc kubenswrapper[4860]: I1014 16:16:46.787373 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z8dp\" (UniqueName: \"kubernetes.io/projected/f10fa41c-e3bb-4f73-8b42-9ec815d30ed7-kube-api-access-8z8dp\") pod \"crc-debug-z2lfp\" (UID: \"f10fa41c-e3bb-4f73-8b42-9ec815d30ed7\") " pod="openshift-must-gather-r4bls/crc-debug-z2lfp" Oct 14 16:16:46 crc kubenswrapper[4860]: I1014 16:16:46.889408 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f10fa41c-e3bb-4f73-8b42-9ec815d30ed7-host\") pod \"crc-debug-z2lfp\" (UID: \"f10fa41c-e3bb-4f73-8b42-9ec815d30ed7\") " pod="openshift-must-gather-r4bls/crc-debug-z2lfp" Oct 14 16:16:46 crc kubenswrapper[4860]: I1014 16:16:46.889480 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z8dp\" (UniqueName: \"kubernetes.io/projected/f10fa41c-e3bb-4f73-8b42-9ec815d30ed7-kube-api-access-8z8dp\") pod \"crc-debug-z2lfp\" (UID: \"f10fa41c-e3bb-4f73-8b42-9ec815d30ed7\") " pod="openshift-must-gather-r4bls/crc-debug-z2lfp" Oct 14 16:16:46 crc kubenswrapper[4860]: I1014 16:16:46.889525 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f10fa41c-e3bb-4f73-8b42-9ec815d30ed7-host\") pod \"crc-debug-z2lfp\" (UID: \"f10fa41c-e3bb-4f73-8b42-9ec815d30ed7\") " pod="openshift-must-gather-r4bls/crc-debug-z2lfp" Oct 14 16:16:46 crc kubenswrapper[4860]: I1014 16:16:46.914848 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z8dp\" (UniqueName: \"kubernetes.io/projected/f10fa41c-e3bb-4f73-8b42-9ec815d30ed7-kube-api-access-8z8dp\") pod \"crc-debug-z2lfp\" (UID: \"f10fa41c-e3bb-4f73-8b42-9ec815d30ed7\") " pod="openshift-must-gather-r4bls/crc-debug-z2lfp" Oct 14 16:16:46 crc kubenswrapper[4860]: I1014 16:16:46.930258 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-r4bls/crc-debug-z2lfp" Oct 14 16:16:46 crc kubenswrapper[4860]: W1014 16:16:46.963192 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf10fa41c_e3bb_4f73_8b42_9ec815d30ed7.slice/crio-95e62ced2f1dce6d5721e997428ce154a477a138925e8cb4004d3630e738156a WatchSource:0}: Error finding container 95e62ced2f1dce6d5721e997428ce154a477a138925e8cb4004d3630e738156a: Status 404 returned error can't find the container with id 95e62ced2f1dce6d5721e997428ce154a477a138925e8cb4004d3630e738156a Oct 14 16:16:47 crc kubenswrapper[4860]: I1014 16:16:47.483874 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r4bls/crc-debug-z2lfp" event={"ID":"f10fa41c-e3bb-4f73-8b42-9ec815d30ed7","Type":"ContainerStarted","Data":"1e7d4db45f3530897644a1c1db41aff3a181a07e2cada8b0eef0779c0c674825"} Oct 14 16:16:47 crc kubenswrapper[4860]: I1014 16:16:47.484589 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r4bls/crc-debug-z2lfp" event={"ID":"f10fa41c-e3bb-4f73-8b42-9ec815d30ed7","Type":"ContainerStarted","Data":"95e62ced2f1dce6d5721e997428ce154a477a138925e8cb4004d3630e738156a"} Oct 14 16:16:47 crc kubenswrapper[4860]: I1014 16:16:47.503650 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-r4bls/crc-debug-z2lfp" podStartSLOduration=1.503628182 podStartE2EDuration="1.503628182s" podCreationTimestamp="2025-10-14 16:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 16:16:47.496785886 +0000 UTC m=+5269.083569335" watchObservedRunningTime="2025-10-14 16:16:47.503628182 +0000 UTC m=+5269.090411631" Oct 14 16:17:19 crc kubenswrapper[4860]: I1014 16:17:19.045778 4860 scope.go:117] "RemoveContainer" containerID="bfe607d8cf4464fef7433c73604268a819ca87d83899a6e572446ba0222603bd" Oct 14 16:17:33 crc kubenswrapper[4860]: I1014 16:17:33.916817 4860 generic.go:334] "Generic (PLEG): container finished" podID="f10fa41c-e3bb-4f73-8b42-9ec815d30ed7" containerID="1e7d4db45f3530897644a1c1db41aff3a181a07e2cada8b0eef0779c0c674825" exitCode=0 Oct 14 16:17:33 crc kubenswrapper[4860]: I1014 16:17:33.916914 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r4bls/crc-debug-z2lfp" event={"ID":"f10fa41c-e3bb-4f73-8b42-9ec815d30ed7","Type":"ContainerDied","Data":"1e7d4db45f3530897644a1c1db41aff3a181a07e2cada8b0eef0779c0c674825"} Oct 14 16:17:35 crc kubenswrapper[4860]: I1014 16:17:35.033351 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-r4bls/crc-debug-z2lfp" Oct 14 16:17:35 crc kubenswrapper[4860]: I1014 16:17:35.074522 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-r4bls/crc-debug-z2lfp"] Oct 14 16:17:35 crc kubenswrapper[4860]: I1014 16:17:35.091587 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-r4bls/crc-debug-z2lfp"] Oct 14 16:17:35 crc kubenswrapper[4860]: I1014 16:17:35.183004 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f10fa41c-e3bb-4f73-8b42-9ec815d30ed7-host\") pod \"f10fa41c-e3bb-4f73-8b42-9ec815d30ed7\" (UID: \"f10fa41c-e3bb-4f73-8b42-9ec815d30ed7\") " Oct 14 16:17:35 crc kubenswrapper[4860]: I1014 16:17:35.183109 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f10fa41c-e3bb-4f73-8b42-9ec815d30ed7-host" (OuterVolumeSpecName: "host") pod "f10fa41c-e3bb-4f73-8b42-9ec815d30ed7" (UID: "f10fa41c-e3bb-4f73-8b42-9ec815d30ed7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 16:17:35 crc kubenswrapper[4860]: I1014 16:17:35.183289 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z8dp\" (UniqueName: \"kubernetes.io/projected/f10fa41c-e3bb-4f73-8b42-9ec815d30ed7-kube-api-access-8z8dp\") pod \"f10fa41c-e3bb-4f73-8b42-9ec815d30ed7\" (UID: \"f10fa41c-e3bb-4f73-8b42-9ec815d30ed7\") " Oct 14 16:17:35 crc kubenswrapper[4860]: I1014 16:17:35.183818 4860 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f10fa41c-e3bb-4f73-8b42-9ec815d30ed7-host\") on node \"crc\" DevicePath \"\"" Oct 14 16:17:35 crc kubenswrapper[4860]: I1014 16:17:35.195566 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f10fa41c-e3bb-4f73-8b42-9ec815d30ed7-kube-api-access-8z8dp" (OuterVolumeSpecName: "kube-api-access-8z8dp") pod "f10fa41c-e3bb-4f73-8b42-9ec815d30ed7" (UID: "f10fa41c-e3bb-4f73-8b42-9ec815d30ed7"). InnerVolumeSpecName "kube-api-access-8z8dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:17:35 crc kubenswrapper[4860]: I1014 16:17:35.285362 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z8dp\" (UniqueName: \"kubernetes.io/projected/f10fa41c-e3bb-4f73-8b42-9ec815d30ed7-kube-api-access-8z8dp\") on node \"crc\" DevicePath \"\"" Oct 14 16:17:35 crc kubenswrapper[4860]: I1014 16:17:35.935470 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95e62ced2f1dce6d5721e997428ce154a477a138925e8cb4004d3630e738156a" Oct 14 16:17:35 crc kubenswrapper[4860]: I1014 16:17:35.935558 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-r4bls/crc-debug-z2lfp" Oct 14 16:17:36 crc kubenswrapper[4860]: I1014 16:17:36.306544 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-r4bls/crc-debug-7tnth"] Oct 14 16:17:36 crc kubenswrapper[4860]: E1014 16:17:36.306894 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10fa41c-e3bb-4f73-8b42-9ec815d30ed7" containerName="container-00" Oct 14 16:17:36 crc kubenswrapper[4860]: I1014 16:17:36.306905 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10fa41c-e3bb-4f73-8b42-9ec815d30ed7" containerName="container-00" Oct 14 16:17:36 crc kubenswrapper[4860]: I1014 16:17:36.307146 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10fa41c-e3bb-4f73-8b42-9ec815d30ed7" containerName="container-00" Oct 14 16:17:36 crc kubenswrapper[4860]: I1014 16:17:36.307714 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r4bls/crc-debug-7tnth" Oct 14 16:17:36 crc kubenswrapper[4860]: I1014 16:17:36.310054 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-r4bls"/"default-dockercfg-rgmxx" Oct 14 16:17:36 crc kubenswrapper[4860]: I1014 16:17:36.405541 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwzz8\" (UniqueName: \"kubernetes.io/projected/6e6cee29-a032-4161-8958-0fa3f524d4ef-kube-api-access-vwzz8\") pod \"crc-debug-7tnth\" (UID: \"6e6cee29-a032-4161-8958-0fa3f524d4ef\") " pod="openshift-must-gather-r4bls/crc-debug-7tnth" Oct 14 16:17:36 crc kubenswrapper[4860]: I1014 16:17:36.405687 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e6cee29-a032-4161-8958-0fa3f524d4ef-host\") pod \"crc-debug-7tnth\" (UID: \"6e6cee29-a032-4161-8958-0fa3f524d4ef\") " pod="openshift-must-gather-r4bls/crc-debug-7tnth" Oct 14 16:17:36 crc kubenswrapper[4860]: I1014 16:17:36.507393 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwzz8\" (UniqueName: \"kubernetes.io/projected/6e6cee29-a032-4161-8958-0fa3f524d4ef-kube-api-access-vwzz8\") pod \"crc-debug-7tnth\" (UID: \"6e6cee29-a032-4161-8958-0fa3f524d4ef\") " pod="openshift-must-gather-r4bls/crc-debug-7tnth" Oct 14 16:17:36 crc kubenswrapper[4860]: I1014 16:17:36.507555 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e6cee29-a032-4161-8958-0fa3f524d4ef-host\") pod \"crc-debug-7tnth\" (UID: \"6e6cee29-a032-4161-8958-0fa3f524d4ef\") " pod="openshift-must-gather-r4bls/crc-debug-7tnth" Oct 14 16:17:36 crc kubenswrapper[4860]: I1014 16:17:36.507768 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e6cee29-a032-4161-8958-0fa3f524d4ef-host\") pod \"crc-debug-7tnth\" (UID: \"6e6cee29-a032-4161-8958-0fa3f524d4ef\") " pod="openshift-must-gather-r4bls/crc-debug-7tnth" Oct 14 16:17:36 crc kubenswrapper[4860]: I1014 16:17:36.525721 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwzz8\" (UniqueName: \"kubernetes.io/projected/6e6cee29-a032-4161-8958-0fa3f524d4ef-kube-api-access-vwzz8\") pod \"crc-debug-7tnth\" (UID: \"6e6cee29-a032-4161-8958-0fa3f524d4ef\") " pod="openshift-must-gather-r4bls/crc-debug-7tnth" Oct 14 16:17:36 crc kubenswrapper[4860]: I1014 
16:17:36.625021 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r4bls/crc-debug-7tnth" Oct 14 16:17:36 crc kubenswrapper[4860]: I1014 16:17:36.943809 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r4bls/crc-debug-7tnth" event={"ID":"6e6cee29-a032-4161-8958-0fa3f524d4ef","Type":"ContainerStarted","Data":"5ade317291d80b57b052051b54a4d1e0fd4d7ecd8d946831bccc0efe75b0763b"} Oct 14 16:17:36 crc kubenswrapper[4860]: I1014 16:17:36.944095 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r4bls/crc-debug-7tnth" event={"ID":"6e6cee29-a032-4161-8958-0fa3f524d4ef","Type":"ContainerStarted","Data":"305792a8a81bfd42954f031cc6592460a529a4a75d9fb94b93843501211bc4db"} Oct 14 16:17:36 crc kubenswrapper[4860]: I1014 16:17:36.960588 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-r4bls/crc-debug-7tnth" podStartSLOduration=0.960571608 podStartE2EDuration="960.571608ms" podCreationTimestamp="2025-10-14 16:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 16:17:36.954647415 +0000 UTC m=+5318.541430874" watchObservedRunningTime="2025-10-14 16:17:36.960571608 +0000 UTC m=+5318.547355057" Oct 14 16:17:37 crc kubenswrapper[4860]: I1014 16:17:37.072062 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f10fa41c-e3bb-4f73-8b42-9ec815d30ed7" path="/var/lib/kubelet/pods/f10fa41c-e3bb-4f73-8b42-9ec815d30ed7/volumes" Oct 14 16:17:37 crc kubenswrapper[4860]: I1014 16:17:37.957324 4860 generic.go:334] "Generic (PLEG): container finished" podID="6e6cee29-a032-4161-8958-0fa3f524d4ef" containerID="5ade317291d80b57b052051b54a4d1e0fd4d7ecd8d946831bccc0efe75b0763b" exitCode=0 Oct 14 16:17:37 crc kubenswrapper[4860]: I1014 16:17:37.958630 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r4bls/crc-debug-7tnth" event={"ID":"6e6cee29-a032-4161-8958-0fa3f524d4ef","Type":"ContainerDied","Data":"5ade317291d80b57b052051b54a4d1e0fd4d7ecd8d946831bccc0efe75b0763b"} Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.058590 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r4bls/crc-debug-7tnth" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.133900 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-r4bls/crc-debug-7tnth"] Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.140960 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-r4bls/crc-debug-7tnth"] Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.153780 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwzz8\" (UniqueName: \"kubernetes.io/projected/6e6cee29-a032-4161-8958-0fa3f524d4ef-kube-api-access-vwzz8\") pod \"6e6cee29-a032-4161-8958-0fa3f524d4ef\" (UID: \"6e6cee29-a032-4161-8958-0fa3f524d4ef\") " Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.153953 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e6cee29-a032-4161-8958-0fa3f524d4ef-host" (OuterVolumeSpecName: "host") pod "6e6cee29-a032-4161-8958-0fa3f524d4ef" (UID: "6e6cee29-a032-4161-8958-0fa3f524d4ef"). InnerVolumeSpecName "host". 
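Note that the microsecond timestamps around this point run slightly out of order (.153780, then .153953, then .153811 just below): the reconciler and the operation executor log from separate goroutines, so journal arrival order is not klog emission order. Sorting parsed entries by the embedded timestamp restores causality:

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
)

// Order kubelet log entries by the klog timestamp ("I1014 16:17:39.153811")
// rather than by journal arrival order. Sketch over the entries above.
var ts = regexp.MustCompile(`[IEW]\d{4} (\d{2}:\d{2}:\d{2}\.\d{6})`)

func sortByKlogTime(lines []string) {
	sort.SliceStable(lines, func(i, j int) bool {
		mi, mj := ts.FindStringSubmatch(lines[i]), ts.FindStringSubmatch(lines[j])
		if mi == nil || mj == nil {
			return false // leave unparsable lines where they are
		}
		return mi[1] < mj[1] // same-day HH:MM:SS.micro sorts lexically
	})
}

func main() {
	lines := []string{
		"I1014 16:17:39.153953 ... UnmountVolume.TearDown succeeded ... host",
		"I1014 16:17:39.153780 ... UnmountVolume started ... kube-api-access-vwzz8",
		"I1014 16:17:39.153811 ... UnmountVolume started ... host",
	}
	sortByKlogTime(lines)
	for _, l := range lines {
		fmt.Println(l)
	}
}
```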
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.153811 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e6cee29-a032-4161-8958-0fa3f524d4ef-host\") pod \"6e6cee29-a032-4161-8958-0fa3f524d4ef\" (UID: \"6e6cee29-a032-4161-8958-0fa3f524d4ef\") " Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.155189 4860 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e6cee29-a032-4161-8958-0fa3f524d4ef-host\") on node \"crc\" DevicePath \"\"" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.161166 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e6cee29-a032-4161-8958-0fa3f524d4ef-kube-api-access-vwzz8" (OuterVolumeSpecName: "kube-api-access-vwzz8") pod "6e6cee29-a032-4161-8958-0fa3f524d4ef" (UID: "6e6cee29-a032-4161-8958-0fa3f524d4ef"). InnerVolumeSpecName "kube-api-access-vwzz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.257326 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwzz8\" (UniqueName: \"kubernetes.io/projected/6e6cee29-a032-4161-8958-0fa3f524d4ef-kube-api-access-vwzz8\") on node \"crc\" DevicePath \"\"" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.638305 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l94st"] Oct 14 16:17:39 crc kubenswrapper[4860]: E1014 16:17:39.639368 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6cee29-a032-4161-8958-0fa3f524d4ef" containerName="container-00" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.639458 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6cee29-a032-4161-8958-0fa3f524d4ef" containerName="container-00" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.639676 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e6cee29-a032-4161-8958-0fa3f524d4ef" containerName="container-00" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.641008 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l94st" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.667291 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l94st"] Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.765764 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8572m\" (UniqueName: \"kubernetes.io/projected/734ecd8c-4824-4547-80b9-c673af75cab0-kube-api-access-8572m\") pod \"community-operators-l94st\" (UID: \"734ecd8c-4824-4547-80b9-c673af75cab0\") " pod="openshift-marketplace/community-operators-l94st" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.765878 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/734ecd8c-4824-4547-80b9-c673af75cab0-catalog-content\") pod \"community-operators-l94st\" (UID: \"734ecd8c-4824-4547-80b9-c673af75cab0\") " pod="openshift-marketplace/community-operators-l94st" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.765965 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/734ecd8c-4824-4547-80b9-c673af75cab0-utilities\") pod \"community-operators-l94st\" (UID: \"734ecd8c-4824-4547-80b9-c673af75cab0\") " pod="openshift-marketplace/community-operators-l94st" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.867925 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/734ecd8c-4824-4547-80b9-c673af75cab0-utilities\") pod \"community-operators-l94st\" (UID: \"734ecd8c-4824-4547-80b9-c673af75cab0\") " pod="openshift-marketplace/community-operators-l94st" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.868064 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8572m\" (UniqueName: \"kubernetes.io/projected/734ecd8c-4824-4547-80b9-c673af75cab0-kube-api-access-8572m\") pod \"community-operators-l94st\" (UID: \"734ecd8c-4824-4547-80b9-c673af75cab0\") " pod="openshift-marketplace/community-operators-l94st" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.868145 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/734ecd8c-4824-4547-80b9-c673af75cab0-catalog-content\") pod \"community-operators-l94st\" (UID: \"734ecd8c-4824-4547-80b9-c673af75cab0\") " pod="openshift-marketplace/community-operators-l94st" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.868715 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/734ecd8c-4824-4547-80b9-c673af75cab0-catalog-content\") pod \"community-operators-l94st\" (UID: \"734ecd8c-4824-4547-80b9-c673af75cab0\") " pod="openshift-marketplace/community-operators-l94st" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.868990 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/734ecd8c-4824-4547-80b9-c673af75cab0-utilities\") pod \"community-operators-l94st\" (UID: \"734ecd8c-4824-4547-80b9-c673af75cab0\") " pod="openshift-marketplace/community-operators-l94st" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.886569 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8572m\" (UniqueName: \"kubernetes.io/projected/734ecd8c-4824-4547-80b9-c673af75cab0-kube-api-access-8572m\") pod \"community-operators-l94st\" (UID: \"734ecd8c-4824-4547-80b9-c673af75cab0\") " pod="openshift-marketplace/community-operators-l94st" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.963595 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l94st" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.981551 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="305792a8a81bfd42954f031cc6592460a529a4a75d9fb94b93843501211bc4db" Oct 14 16:17:39 crc kubenswrapper[4860]: I1014 16:17:39.981616 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r4bls/crc-debug-7tnth" Oct 14 16:17:40 crc kubenswrapper[4860]: W1014 16:17:40.546758 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod734ecd8c_4824_4547_80b9_c673af75cab0.slice/crio-7b124ac9e8b3d88aa21e6a0ebf7604287415a383f813f4dbd2e6a578e9d6e7ac WatchSource:0}: Error finding container 7b124ac9e8b3d88aa21e6a0ebf7604287415a383f813f4dbd2e6a578e9d6e7ac: Status 404 returned error can't find the container with id 7b124ac9e8b3d88aa21e6a0ebf7604287415a383f813f4dbd2e6a578e9d6e7ac Oct 14 16:17:40 crc kubenswrapper[4860]: I1014 16:17:40.571238 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l94st"] Oct 14 16:17:40 crc kubenswrapper[4860]: I1014 16:17:40.639399 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-r4bls/crc-debug-9xmrw"] Oct 14 16:17:40 crc kubenswrapper[4860]: I1014 16:17:40.640636 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-r4bls/crc-debug-9xmrw" Oct 14 16:17:40 crc kubenswrapper[4860]: I1014 16:17:40.643084 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-r4bls"/"default-dockercfg-rgmxx" Oct 14 16:17:40 crc kubenswrapper[4860]: I1014 16:17:40.788967 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nw5f\" (UniqueName: \"kubernetes.io/projected/6cc4c15a-a471-4e10-b22c-d06788d05793-kube-api-access-5nw5f\") pod \"crc-debug-9xmrw\" (UID: \"6cc4c15a-a471-4e10-b22c-d06788d05793\") " pod="openshift-must-gather-r4bls/crc-debug-9xmrw" Oct 14 16:17:40 crc kubenswrapper[4860]: I1014 16:17:40.789325 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6cc4c15a-a471-4e10-b22c-d06788d05793-host\") pod \"crc-debug-9xmrw\" (UID: \"6cc4c15a-a471-4e10-b22c-d06788d05793\") " pod="openshift-must-gather-r4bls/crc-debug-9xmrw" Oct 14 16:17:40 crc kubenswrapper[4860]: I1014 16:17:40.891556 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6cc4c15a-a471-4e10-b22c-d06788d05793-host\") pod \"crc-debug-9xmrw\" (UID: \"6cc4c15a-a471-4e10-b22c-d06788d05793\") " pod="openshift-must-gather-r4bls/crc-debug-9xmrw" Oct 14 16:17:40 crc kubenswrapper[4860]: I1014 16:17:40.891654 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6cc4c15a-a471-4e10-b22c-d06788d05793-host\") pod \"crc-debug-9xmrw\" (UID: \"6cc4c15a-a471-4e10-b22c-d06788d05793\") " pod="openshift-must-gather-r4bls/crc-debug-9xmrw" Oct 14 16:17:40 crc kubenswrapper[4860]: I1014 16:17:40.891843 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nw5f\" (UniqueName: \"kubernetes.io/projected/6cc4c15a-a471-4e10-b22c-d06788d05793-kube-api-access-5nw5f\") pod \"crc-debug-9xmrw\" (UID: \"6cc4c15a-a471-4e10-b22c-d06788d05793\") " pod="openshift-must-gather-r4bls/crc-debug-9xmrw" Oct 14 16:17:40 crc kubenswrapper[4860]: I1014 16:17:40.919497 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nw5f\" (UniqueName: \"kubernetes.io/projected/6cc4c15a-a471-4e10-b22c-d06788d05793-kube-api-access-5nw5f\") pod \"crc-debug-9xmrw\" (UID: \"6cc4c15a-a471-4e10-b22c-d06788d05793\") " pod="openshift-must-gather-r4bls/crc-debug-9xmrw" Oct 14 16:17:40 crc kubenswrapper[4860]: I1014 16:17:40.991119 4860 generic.go:334] "Generic (PLEG): container finished" podID="734ecd8c-4824-4547-80b9-c673af75cab0" containerID="60a8807ff671354654d335f5ada62585c2114be7e925ca7d9bf91479c2c9300e" exitCode=0 Oct 14 16:17:40 crc kubenswrapper[4860]: I1014 16:17:40.991157 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l94st" event={"ID":"734ecd8c-4824-4547-80b9-c673af75cab0","Type":"ContainerDied","Data":"60a8807ff671354654d335f5ada62585c2114be7e925ca7d9bf91479c2c9300e"} Oct 14 16:17:40 crc kubenswrapper[4860]: I1014 16:17:40.991180 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l94st" event={"ID":"734ecd8c-4824-4547-80b9-c673af75cab0","Type":"ContainerStarted","Data":"7b124ac9e8b3d88aa21e6a0ebf7604287415a383f813f4dbd2e6a578e9d6e7ac"} Oct 14 16:17:40 crc kubenswrapper[4860]: I1014 16:17:40.993568 4860 provider.go:102] Refreshing cache 
for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 16:17:41 crc kubenswrapper[4860]: I1014 16:17:41.025251 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r4bls/crc-debug-9xmrw" Oct 14 16:17:41 crc kubenswrapper[4860]: W1014 16:17:41.070201 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cc4c15a_a471_4e10_b22c_d06788d05793.slice/crio-480c84ce405afd0babc14f28fdb1104ee6563361f692829fa026dc423a124775 WatchSource:0}: Error finding container 480c84ce405afd0babc14f28fdb1104ee6563361f692829fa026dc423a124775: Status 404 returned error can't find the container with id 480c84ce405afd0babc14f28fdb1104ee6563361f692829fa026dc423a124775 Oct 14 16:17:41 crc kubenswrapper[4860]: I1014 16:17:41.073306 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e6cee29-a032-4161-8958-0fa3f524d4ef" path="/var/lib/kubelet/pods/6e6cee29-a032-4161-8958-0fa3f524d4ef/volumes" Oct 14 16:17:42 crc kubenswrapper[4860]: I1014 16:17:42.006294 4860 generic.go:334] "Generic (PLEG): container finished" podID="6cc4c15a-a471-4e10-b22c-d06788d05793" containerID="4fede1ab13fd059292881cc1ec5d982153e2c910af8c7938a0fa01c47bc32d2d" exitCode=0 Oct 14 16:17:42 crc kubenswrapper[4860]: I1014 16:17:42.006701 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r4bls/crc-debug-9xmrw" event={"ID":"6cc4c15a-a471-4e10-b22c-d06788d05793","Type":"ContainerDied","Data":"4fede1ab13fd059292881cc1ec5d982153e2c910af8c7938a0fa01c47bc32d2d"} Oct 14 16:17:42 crc kubenswrapper[4860]: I1014 16:17:42.006728 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r4bls/crc-debug-9xmrw" event={"ID":"6cc4c15a-a471-4e10-b22c-d06788d05793","Type":"ContainerStarted","Data":"480c84ce405afd0babc14f28fdb1104ee6563361f692829fa026dc423a124775"} Oct 14 16:17:42 crc kubenswrapper[4860]: I1014 16:17:42.071271 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-r4bls/crc-debug-9xmrw"] Oct 14 16:17:42 crc kubenswrapper[4860]: I1014 16:17:42.088521 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-r4bls/crc-debug-9xmrw"] Oct 14 16:17:43 crc kubenswrapper[4860]: I1014 16:17:43.119289 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r4bls/crc-debug-9xmrw" Oct 14 16:17:43 crc kubenswrapper[4860]: I1014 16:17:43.240770 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nw5f\" (UniqueName: \"kubernetes.io/projected/6cc4c15a-a471-4e10-b22c-d06788d05793-kube-api-access-5nw5f\") pod \"6cc4c15a-a471-4e10-b22c-d06788d05793\" (UID: \"6cc4c15a-a471-4e10-b22c-d06788d05793\") " Oct 14 16:17:43 crc kubenswrapper[4860]: I1014 16:17:43.240960 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6cc4c15a-a471-4e10-b22c-d06788d05793-host\") pod \"6cc4c15a-a471-4e10-b22c-d06788d05793\" (UID: \"6cc4c15a-a471-4e10-b22c-d06788d05793\") " Oct 14 16:17:43 crc kubenswrapper[4860]: I1014 16:17:43.241146 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cc4c15a-a471-4e10-b22c-d06788d05793-host" (OuterVolumeSpecName: "host") pod "6cc4c15a-a471-4e10-b22c-d06788d05793" (UID: "6cc4c15a-a471-4e10-b22c-d06788d05793"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 16:17:43 crc kubenswrapper[4860]: I1014 16:17:43.241548 4860 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6cc4c15a-a471-4e10-b22c-d06788d05793-host\") on node \"crc\" DevicePath \"\"" Oct 14 16:17:43 crc kubenswrapper[4860]: I1014 16:17:43.251454 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc4c15a-a471-4e10-b22c-d06788d05793-kube-api-access-5nw5f" (OuterVolumeSpecName: "kube-api-access-5nw5f") pod "6cc4c15a-a471-4e10-b22c-d06788d05793" (UID: "6cc4c15a-a471-4e10-b22c-d06788d05793"). InnerVolumeSpecName "kube-api-access-5nw5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:17:43 crc kubenswrapper[4860]: I1014 16:17:43.343231 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nw5f\" (UniqueName: \"kubernetes.io/projected/6cc4c15a-a471-4e10-b22c-d06788d05793-kube-api-access-5nw5f\") on node \"crc\" DevicePath \"\"" Oct 14 16:17:44 crc kubenswrapper[4860]: I1014 16:17:44.021456 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r4bls/crc-debug-9xmrw" Oct 14 16:17:44 crc kubenswrapper[4860]: I1014 16:17:44.021477 4860 scope.go:117] "RemoveContainer" containerID="4fede1ab13fd059292881cc1ec5d982153e2c910af8c7938a0fa01c47bc32d2d" Oct 14 16:17:44 crc kubenswrapper[4860]: I1014 16:17:44.023831 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l94st" event={"ID":"734ecd8c-4824-4547-80b9-c673af75cab0","Type":"ContainerStarted","Data":"535a5e607162a8a9ac9e8ca96a9ed5bfddbf694d4266a36d3023ce856fe8a4df"} Oct 14 16:17:45 crc kubenswrapper[4860]: I1014 16:17:45.071715 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc4c15a-a471-4e10-b22c-d06788d05793" path="/var/lib/kubelet/pods/6cc4c15a-a471-4e10-b22c-d06788d05793/volumes" Oct 14 16:17:48 crc kubenswrapper[4860]: I1014 16:17:48.064977 4860 generic.go:334] "Generic (PLEG): container finished" podID="734ecd8c-4824-4547-80b9-c673af75cab0" containerID="535a5e607162a8a9ac9e8ca96a9ed5bfddbf694d4266a36d3023ce856fe8a4df" exitCode=0 Oct 14 16:17:48 crc kubenswrapper[4860]: I1014 16:17:48.065052 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l94st" event={"ID":"734ecd8c-4824-4547-80b9-c673af75cab0","Type":"ContainerDied","Data":"535a5e607162a8a9ac9e8ca96a9ed5bfddbf694d4266a36d3023ce856fe8a4df"} Oct 14 16:17:50 crc kubenswrapper[4860]: I1014 16:17:50.091301 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l94st" event={"ID":"734ecd8c-4824-4547-80b9-c673af75cab0","Type":"ContainerStarted","Data":"ee936e3e861bb573758ba93d84296ea868def5d8c69ab88833a1908fc8bb19a1"} Oct 14 16:17:50 crc kubenswrapper[4860]: I1014 16:17:50.115460 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l94st" podStartSLOduration=2.6052810600000003 podStartE2EDuration="11.115440965s" podCreationTimestamp="2025-10-14 16:17:39 +0000 UTC" firstStartedPulling="2025-10-14 16:17:40.993349765 +0000 UTC m=+5322.580133214" lastFinishedPulling="2025-10-14 16:17:49.50350966 +0000 UTC m=+5331.090293119" observedRunningTime="2025-10-14 16:17:50.108340023 +0000 UTC m=+5331.695123492" watchObservedRunningTime="2025-10-14 16:17:50.115440965 +0000 UTC m=+5331.702224414" Oct 14 
Oct 14 16:17:59 crc kubenswrapper[4860]: I1014 16:17:59.964641 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l94st"
Oct 14 16:17:59 crc kubenswrapper[4860]: I1014 16:17:59.965085 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l94st"
Oct 14 16:18:00 crc kubenswrapper[4860]: I1014 16:18:00.018235 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l94st"
Oct 14 16:18:00 crc kubenswrapper[4860]: I1014 16:18:00.225375 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l94st"
Oct 14 16:18:00 crc kubenswrapper[4860]: I1014 16:18:00.272213 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l94st"]
Oct 14 16:18:02 crc kubenswrapper[4860]: I1014 16:18:02.193119 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l94st" podUID="734ecd8c-4824-4547-80b9-c673af75cab0" containerName="registry-server" containerID="cri-o://ee936e3e861bb573758ba93d84296ea868def5d8c69ab88833a1908fc8bb19a1" gracePeriod=2
Oct 14 16:18:02 crc kubenswrapper[4860]: I1014 16:18:02.690102 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l94st"
Oct 14 16:18:02 crc kubenswrapper[4860]: I1014 16:18:02.792364 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/734ecd8c-4824-4547-80b9-c673af75cab0-utilities\") pod \"734ecd8c-4824-4547-80b9-c673af75cab0\" (UID: \"734ecd8c-4824-4547-80b9-c673af75cab0\") "
Oct 14 16:18:02 crc kubenswrapper[4860]: I1014 16:18:02.792635 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/734ecd8c-4824-4547-80b9-c673af75cab0-catalog-content\") pod \"734ecd8c-4824-4547-80b9-c673af75cab0\" (UID: \"734ecd8c-4824-4547-80b9-c673af75cab0\") "
Oct 14 16:18:02 crc kubenswrapper[4860]: I1014 16:18:02.792702 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8572m\" (UniqueName: \"kubernetes.io/projected/734ecd8c-4824-4547-80b9-c673af75cab0-kube-api-access-8572m\") pod \"734ecd8c-4824-4547-80b9-c673af75cab0\" (UID: \"734ecd8c-4824-4547-80b9-c673af75cab0\") "
Oct 14 16:18:02 crc kubenswrapper[4860]: I1014 16:18:02.792995 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/734ecd8c-4824-4547-80b9-c673af75cab0-utilities" (OuterVolumeSpecName: "utilities") pod "734ecd8c-4824-4547-80b9-c673af75cab0" (UID: "734ecd8c-4824-4547-80b9-c673af75cab0"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 16:18:02 crc kubenswrapper[4860]: I1014 16:18:02.793308 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/734ecd8c-4824-4547-80b9-c673af75cab0-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 16:18:02 crc kubenswrapper[4860]: I1014 16:18:02.799740 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/734ecd8c-4824-4547-80b9-c673af75cab0-kube-api-access-8572m" (OuterVolumeSpecName: "kube-api-access-8572m") pod "734ecd8c-4824-4547-80b9-c673af75cab0" (UID: "734ecd8c-4824-4547-80b9-c673af75cab0"). InnerVolumeSpecName "kube-api-access-8572m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:18:02 crc kubenswrapper[4860]: I1014 16:18:02.839375 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/734ecd8c-4824-4547-80b9-c673af75cab0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "734ecd8c-4824-4547-80b9-c673af75cab0" (UID: "734ecd8c-4824-4547-80b9-c673af75cab0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 16:18:02 crc kubenswrapper[4860]: I1014 16:18:02.894610 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/734ecd8c-4824-4547-80b9-c673af75cab0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 16:18:02 crc kubenswrapper[4860]: I1014 16:18:02.894642 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8572m\" (UniqueName: \"kubernetes.io/projected/734ecd8c-4824-4547-80b9-c673af75cab0-kube-api-access-8572m\") on node \"crc\" DevicePath \"\"" Oct 14 16:18:03 crc kubenswrapper[4860]: I1014 16:18:03.204077 4860 generic.go:334] "Generic (PLEG): container finished" podID="734ecd8c-4824-4547-80b9-c673af75cab0" containerID="ee936e3e861bb573758ba93d84296ea868def5d8c69ab88833a1908fc8bb19a1" exitCode=0 Oct 14 16:18:03 crc kubenswrapper[4860]: I1014 16:18:03.204128 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l94st" Oct 14 16:18:03 crc kubenswrapper[4860]: I1014 16:18:03.204126 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l94st" event={"ID":"734ecd8c-4824-4547-80b9-c673af75cab0","Type":"ContainerDied","Data":"ee936e3e861bb573758ba93d84296ea868def5d8c69ab88833a1908fc8bb19a1"} Oct 14 16:18:03 crc kubenswrapper[4860]: I1014 16:18:03.204922 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l94st" event={"ID":"734ecd8c-4824-4547-80b9-c673af75cab0","Type":"ContainerDied","Data":"7b124ac9e8b3d88aa21e6a0ebf7604287415a383f813f4dbd2e6a578e9d6e7ac"} Oct 14 16:18:03 crc kubenswrapper[4860]: I1014 16:18:03.204957 4860 scope.go:117] "RemoveContainer" containerID="ee936e3e861bb573758ba93d84296ea868def5d8c69ab88833a1908fc8bb19a1" Oct 14 16:18:03 crc kubenswrapper[4860]: I1014 16:18:03.229978 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l94st"] Oct 14 16:18:03 crc kubenswrapper[4860]: I1014 16:18:03.233895 4860 scope.go:117] "RemoveContainer" containerID="535a5e607162a8a9ac9e8ca96a9ed5bfddbf694d4266a36d3023ce856fe8a4df" Oct 14 16:18:03 crc kubenswrapper[4860]: I1014 16:18:03.242563 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l94st"] Oct 14 16:18:03 crc kubenswrapper[4860]: I1014 16:18:03.267233 4860 scope.go:117] "RemoveContainer" containerID="60a8807ff671354654d335f5ada62585c2114be7e925ca7d9bf91479c2c9300e" Oct 14 16:18:03 crc kubenswrapper[4860]: I1014 16:18:03.306997 4860 scope.go:117] "RemoveContainer" containerID="ee936e3e861bb573758ba93d84296ea868def5d8c69ab88833a1908fc8bb19a1" Oct 14 16:18:03 crc kubenswrapper[4860]: E1014 16:18:03.307426 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee936e3e861bb573758ba93d84296ea868def5d8c69ab88833a1908fc8bb19a1\": container with ID starting with ee936e3e861bb573758ba93d84296ea868def5d8c69ab88833a1908fc8bb19a1 not found: ID does not exist" containerID="ee936e3e861bb573758ba93d84296ea868def5d8c69ab88833a1908fc8bb19a1" Oct 14 16:18:03 crc kubenswrapper[4860]: I1014 16:18:03.307586 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee936e3e861bb573758ba93d84296ea868def5d8c69ab88833a1908fc8bb19a1"} err="failed to get container status \"ee936e3e861bb573758ba93d84296ea868def5d8c69ab88833a1908fc8bb19a1\": rpc error: code = NotFound desc = could not find container \"ee936e3e861bb573758ba93d84296ea868def5d8c69ab88833a1908fc8bb19a1\": container with ID starting with ee936e3e861bb573758ba93d84296ea868def5d8c69ab88833a1908fc8bb19a1 not found: ID does not exist" Oct 14 16:18:03 crc kubenswrapper[4860]: I1014 16:18:03.307691 4860 scope.go:117] "RemoveContainer" containerID="535a5e607162a8a9ac9e8ca96a9ed5bfddbf694d4266a36d3023ce856fe8a4df" Oct 14 16:18:03 crc kubenswrapper[4860]: E1014 16:18:03.308099 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"535a5e607162a8a9ac9e8ca96a9ed5bfddbf694d4266a36d3023ce856fe8a4df\": container with ID starting with 535a5e607162a8a9ac9e8ca96a9ed5bfddbf694d4266a36d3023ce856fe8a4df not found: ID does not exist" containerID="535a5e607162a8a9ac9e8ca96a9ed5bfddbf694d4266a36d3023ce856fe8a4df" Oct 14 16:18:03 crc kubenswrapper[4860]: I1014 16:18:03.308135 4860 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"535a5e607162a8a9ac9e8ca96a9ed5bfddbf694d4266a36d3023ce856fe8a4df"} err="failed to get container status \"535a5e607162a8a9ac9e8ca96a9ed5bfddbf694d4266a36d3023ce856fe8a4df\": rpc error: code = NotFound desc = could not find container \"535a5e607162a8a9ac9e8ca96a9ed5bfddbf694d4266a36d3023ce856fe8a4df\": container with ID starting with 535a5e607162a8a9ac9e8ca96a9ed5bfddbf694d4266a36d3023ce856fe8a4df not found: ID does not exist"
Oct 14 16:18:03 crc kubenswrapper[4860]: I1014 16:18:03.308157 4860 scope.go:117] "RemoveContainer" containerID="60a8807ff671354654d335f5ada62585c2114be7e925ca7d9bf91479c2c9300e"
Oct 14 16:18:03 crc kubenswrapper[4860]: E1014 16:18:03.308450 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60a8807ff671354654d335f5ada62585c2114be7e925ca7d9bf91479c2c9300e\": container with ID starting with 60a8807ff671354654d335f5ada62585c2114be7e925ca7d9bf91479c2c9300e not found: ID does not exist" containerID="60a8807ff671354654d335f5ada62585c2114be7e925ca7d9bf91479c2c9300e"
Oct 14 16:18:03 crc kubenswrapper[4860]: I1014 16:18:03.308473 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60a8807ff671354654d335f5ada62585c2114be7e925ca7d9bf91479c2c9300e"} err="failed to get container status \"60a8807ff671354654d335f5ada62585c2114be7e925ca7d9bf91479c2c9300e\": rpc error: code = NotFound desc = could not find container \"60a8807ff671354654d335f5ada62585c2114be7e925ca7d9bf91479c2c9300e\": container with ID starting with 60a8807ff671354654d335f5ada62585c2114be7e925ca7d9bf91479c2c9300e not found: ID does not exist"
Oct 14 16:18:05 crc kubenswrapper[4860]: I1014 16:18:05.076839 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="734ecd8c-4824-4547-80b9-c673af75cab0" path="/var/lib/kubelet/pods/734ecd8c-4824-4547-80b9-c673af75cab0/volumes"
Oct 14 16:18:06 crc kubenswrapper[4860]: I1014 16:18:06.867922 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tc2tp"]
Oct 14 16:18:06 crc kubenswrapper[4860]: E1014 16:18:06.868639 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="734ecd8c-4824-4547-80b9-c673af75cab0" containerName="extract-utilities"
Oct 14 16:18:06 crc kubenswrapper[4860]: I1014 16:18:06.868652 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="734ecd8c-4824-4547-80b9-c673af75cab0" containerName="extract-utilities"
Oct 14 16:18:06 crc kubenswrapper[4860]: E1014 16:18:06.868671 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc4c15a-a471-4e10-b22c-d06788d05793" containerName="container-00"
Oct 14 16:18:06 crc kubenswrapper[4860]: I1014 16:18:06.868677 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc4c15a-a471-4e10-b22c-d06788d05793" containerName="container-00"
Oct 14 16:18:06 crc kubenswrapper[4860]: E1014 16:18:06.868688 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="734ecd8c-4824-4547-80b9-c673af75cab0" containerName="registry-server"
Oct 14 16:18:06 crc kubenswrapper[4860]: I1014 16:18:06.868694 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="734ecd8c-4824-4547-80b9-c673af75cab0" containerName="registry-server"
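
The paired "RemoveContainer" and "ContainerStatus from runtime service failed" entries above are a benign race: the kubelet re-queries CRI-O for a container it has just deleted, and the runtime answers with gRPC code NotFound, which the kubelet logs and moves past. A minimal sketch of how a CRI client can treat that code as success, using the standard google.golang.org/grpc status helpers (the ignoreNotFound name is illustrative):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ignoreNotFound treats gRPC NotFound as success: the container whose
// status was requested is already gone, which is the desired end state
// of a delete.
func ignoreNotFound(err error) error {
	if status.Code(err) == codes.NotFound {
		return nil
	}
	return err
}

func main() {
	// Simulated runtime response, mirroring the errors logged above.
	err := status.Error(codes.NotFound, `could not find container "ee936e3e..."`)
	fmt.Println(ignoreNotFound(err)) // <nil>: safe to continue teardown
}
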
containerName="extract-content" Oct 14 16:18:06 crc kubenswrapper[4860]: I1014 16:18:06.868729 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="734ecd8c-4824-4547-80b9-c673af75cab0" containerName="extract-content" Oct 14 16:18:06 crc kubenswrapper[4860]: I1014 16:18:06.868916 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc4c15a-a471-4e10-b22c-d06788d05793" containerName="container-00" Oct 14 16:18:06 crc kubenswrapper[4860]: I1014 16:18:06.868938 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="734ecd8c-4824-4547-80b9-c673af75cab0" containerName="registry-server" Oct 14 16:18:06 crc kubenswrapper[4860]: I1014 16:18:06.874418 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tc2tp" Oct 14 16:18:06 crc kubenswrapper[4860]: I1014 16:18:06.909084 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tc2tp"] Oct 14 16:18:06 crc kubenswrapper[4860]: I1014 16:18:06.980622 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70d6c5e-1949-453c-9a01-1434441de454-catalog-content\") pod \"redhat-marketplace-tc2tp\" (UID: \"d70d6c5e-1949-453c-9a01-1434441de454\") " pod="openshift-marketplace/redhat-marketplace-tc2tp" Oct 14 16:18:06 crc kubenswrapper[4860]: I1014 16:18:06.981088 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmrxh\" (UniqueName: \"kubernetes.io/projected/d70d6c5e-1949-453c-9a01-1434441de454-kube-api-access-rmrxh\") pod \"redhat-marketplace-tc2tp\" (UID: \"d70d6c5e-1949-453c-9a01-1434441de454\") " pod="openshift-marketplace/redhat-marketplace-tc2tp" Oct 14 16:18:06 crc kubenswrapper[4860]: I1014 16:18:06.981261 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70d6c5e-1949-453c-9a01-1434441de454-utilities\") pod \"redhat-marketplace-tc2tp\" (UID: \"d70d6c5e-1949-453c-9a01-1434441de454\") " pod="openshift-marketplace/redhat-marketplace-tc2tp" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.083256 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70d6c5e-1949-453c-9a01-1434441de454-catalog-content\") pod \"redhat-marketplace-tc2tp\" (UID: \"d70d6c5e-1949-453c-9a01-1434441de454\") " pod="openshift-marketplace/redhat-marketplace-tc2tp" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.083466 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmrxh\" (UniqueName: \"kubernetes.io/projected/d70d6c5e-1949-453c-9a01-1434441de454-kube-api-access-rmrxh\") pod \"redhat-marketplace-tc2tp\" (UID: \"d70d6c5e-1949-453c-9a01-1434441de454\") " pod="openshift-marketplace/redhat-marketplace-tc2tp" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.083546 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70d6c5e-1949-453c-9a01-1434441de454-utilities\") pod \"redhat-marketplace-tc2tp\" (UID: \"d70d6c5e-1949-453c-9a01-1434441de454\") " pod="openshift-marketplace/redhat-marketplace-tc2tp" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.083833 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70d6c5e-1949-453c-9a01-1434441de454-catalog-content\") pod \"redhat-marketplace-tc2tp\" (UID: \"d70d6c5e-1949-453c-9a01-1434441de454\") " pod="openshift-marketplace/redhat-marketplace-tc2tp" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.084223 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70d6c5e-1949-453c-9a01-1434441de454-utilities\") pod \"redhat-marketplace-tc2tp\" (UID: \"d70d6c5e-1949-453c-9a01-1434441de454\") " pod="openshift-marketplace/redhat-marketplace-tc2tp" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.463910 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nrzb8"] Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.465740 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nrzb8" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.496434 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nrzb8"] Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.591361 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d29a84e-87a8-4728-ad34-2dd3655a8d33-catalog-content\") pod \"redhat-operators-nrzb8\" (UID: \"5d29a84e-87a8-4728-ad34-2dd3655a8d33\") " pod="openshift-marketplace/redhat-operators-nrzb8" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.591409 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d29a84e-87a8-4728-ad34-2dd3655a8d33-utilities\") pod \"redhat-operators-nrzb8\" (UID: \"5d29a84e-87a8-4728-ad34-2dd3655a8d33\") " pod="openshift-marketplace/redhat-operators-nrzb8" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.591498 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pftpj\" (UniqueName: \"kubernetes.io/projected/5d29a84e-87a8-4728-ad34-2dd3655a8d33-kube-api-access-pftpj\") pod \"redhat-operators-nrzb8\" (UID: \"5d29a84e-87a8-4728-ad34-2dd3655a8d33\") " pod="openshift-marketplace/redhat-operators-nrzb8" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.632173 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmrxh\" (UniqueName: \"kubernetes.io/projected/d70d6c5e-1949-453c-9a01-1434441de454-kube-api-access-rmrxh\") pod \"redhat-marketplace-tc2tp\" (UID: \"d70d6c5e-1949-453c-9a01-1434441de454\") " pod="openshift-marketplace/redhat-marketplace-tc2tp" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.693356 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pftpj\" (UniqueName: \"kubernetes.io/projected/5d29a84e-87a8-4728-ad34-2dd3655a8d33-kube-api-access-pftpj\") pod \"redhat-operators-nrzb8\" (UID: \"5d29a84e-87a8-4728-ad34-2dd3655a8d33\") " pod="openshift-marketplace/redhat-operators-nrzb8" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.693507 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d29a84e-87a8-4728-ad34-2dd3655a8d33-catalog-content\") pod \"redhat-operators-nrzb8\" (UID: \"5d29a84e-87a8-4728-ad34-2dd3655a8d33\") " 
pod="openshift-marketplace/redhat-operators-nrzb8" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.693549 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d29a84e-87a8-4728-ad34-2dd3655a8d33-utilities\") pod \"redhat-operators-nrzb8\" (UID: \"5d29a84e-87a8-4728-ad34-2dd3655a8d33\") " pod="openshift-marketplace/redhat-operators-nrzb8" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.693966 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d29a84e-87a8-4728-ad34-2dd3655a8d33-catalog-content\") pod \"redhat-operators-nrzb8\" (UID: \"5d29a84e-87a8-4728-ad34-2dd3655a8d33\") " pod="openshift-marketplace/redhat-operators-nrzb8" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.752186 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d29a84e-87a8-4728-ad34-2dd3655a8d33-utilities\") pod \"redhat-operators-nrzb8\" (UID: \"5d29a84e-87a8-4728-ad34-2dd3655a8d33\") " pod="openshift-marketplace/redhat-operators-nrzb8" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.775012 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pftpj\" (UniqueName: \"kubernetes.io/projected/5d29a84e-87a8-4728-ad34-2dd3655a8d33-kube-api-access-pftpj\") pod \"redhat-operators-nrzb8\" (UID: \"5d29a84e-87a8-4728-ad34-2dd3655a8d33\") " pod="openshift-marketplace/redhat-operators-nrzb8" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.792176 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nrzb8" Oct 14 16:18:07 crc kubenswrapper[4860]: I1014 16:18:07.792891 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tc2tp" Oct 14 16:18:08 crc kubenswrapper[4860]: I1014 16:18:08.897491 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nrzb8"] Oct 14 16:18:08 crc kubenswrapper[4860]: I1014 16:18:08.964596 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tc2tp"] Oct 14 16:18:08 crc kubenswrapper[4860]: W1014 16:18:08.965952 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd70d6c5e_1949_453c_9a01_1434441de454.slice/crio-4bdfc5137211ed2b595e514455bdad1751bdbb2d6099b80308c965cf3103f668 WatchSource:0}: Error finding container 4bdfc5137211ed2b595e514455bdad1751bdbb2d6099b80308c965cf3103f668: Status 404 returned error can't find the container with id 4bdfc5137211ed2b595e514455bdad1751bdbb2d6099b80308c965cf3103f668 Oct 14 16:18:09 crc kubenswrapper[4860]: I1014 16:18:09.291360 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tc2tp" event={"ID":"d70d6c5e-1949-453c-9a01-1434441de454","Type":"ContainerStarted","Data":"4bdfc5137211ed2b595e514455bdad1751bdbb2d6099b80308c965cf3103f668"} Oct 14 16:18:09 crc kubenswrapper[4860]: I1014 16:18:09.292297 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrzb8" event={"ID":"5d29a84e-87a8-4728-ad34-2dd3655a8d33","Type":"ContainerStarted","Data":"9da282a2435533988edb6b205bf5bedadfcd3301bde7ea6e25dcf29c8e30183e"} Oct 14 16:18:09 crc kubenswrapper[4860]: I1014 16:18:09.879944 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rzg2h"] Oct 14 16:18:09 crc kubenswrapper[4860]: I1014 16:18:09.883839 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rzg2h" Oct 14 16:18:09 crc kubenswrapper[4860]: I1014 16:18:09.894705 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rzg2h"] Oct 14 16:18:09 crc kubenswrapper[4860]: I1014 16:18:09.946464 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b945p\" (UniqueName: \"kubernetes.io/projected/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-kube-api-access-b945p\") pod \"certified-operators-rzg2h\" (UID: \"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b\") " pod="openshift-marketplace/certified-operators-rzg2h" Oct 14 16:18:09 crc kubenswrapper[4860]: I1014 16:18:09.946886 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-utilities\") pod \"certified-operators-rzg2h\" (UID: \"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b\") " pod="openshift-marketplace/certified-operators-rzg2h" Oct 14 16:18:09 crc kubenswrapper[4860]: I1014 16:18:09.946936 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-catalog-content\") pod \"certified-operators-rzg2h\" (UID: \"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b\") " pod="openshift-marketplace/certified-operators-rzg2h" Oct 14 16:18:10 crc kubenswrapper[4860]: I1014 16:18:10.049400 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b945p\" (UniqueName: \"kubernetes.io/projected/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-kube-api-access-b945p\") pod \"certified-operators-rzg2h\" (UID: \"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b\") " pod="openshift-marketplace/certified-operators-rzg2h" Oct 14 16:18:10 crc kubenswrapper[4860]: I1014 16:18:10.049641 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-utilities\") pod \"certified-operators-rzg2h\" (UID: \"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b\") " pod="openshift-marketplace/certified-operators-rzg2h" Oct 14 16:18:10 crc kubenswrapper[4860]: I1014 16:18:10.049686 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-catalog-content\") pod \"certified-operators-rzg2h\" (UID: \"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b\") " pod="openshift-marketplace/certified-operators-rzg2h" Oct 14 16:18:10 crc kubenswrapper[4860]: I1014 16:18:10.050283 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-utilities\") pod \"certified-operators-rzg2h\" (UID: \"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b\") " pod="openshift-marketplace/certified-operators-rzg2h" Oct 14 16:18:10 crc kubenswrapper[4860]: I1014 16:18:10.050328 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-catalog-content\") pod \"certified-operators-rzg2h\" (UID: \"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b\") " pod="openshift-marketplace/certified-operators-rzg2h" Oct 14 16:18:10 crc kubenswrapper[4860]: I1014 16:18:10.065806 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-b945p\" (UniqueName: \"kubernetes.io/projected/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-kube-api-access-b945p\") pod \"certified-operators-rzg2h\" (UID: \"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b\") " pod="openshift-marketplace/certified-operators-rzg2h" Oct 14 16:18:10 crc kubenswrapper[4860]: I1014 16:18:10.205577 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rzg2h" Oct 14 16:18:10 crc kubenswrapper[4860]: I1014 16:18:10.737178 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rzg2h"] Oct 14 16:18:11 crc kubenswrapper[4860]: I1014 16:18:11.313099 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rzg2h" event={"ID":"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b","Type":"ContainerStarted","Data":"2a29fcf20ba0bb54aaf7254faf5e067a2fa3cfad75b30315bacf89a87613a5a6"} Oct 14 16:18:15 crc kubenswrapper[4860]: I1014 16:18:15.395801 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tc2tp" event={"ID":"d70d6c5e-1949-453c-9a01-1434441de454","Type":"ContainerStarted","Data":"5f192e39c8b614c0c6bb6fa9c7d271fd13b6653ba2d65578ed0ab24b24573ea0"} Oct 14 16:18:15 crc kubenswrapper[4860]: I1014 16:18:15.400886 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrzb8" event={"ID":"5d29a84e-87a8-4728-ad34-2dd3655a8d33","Type":"ContainerStarted","Data":"845c59f61175f5cbcba5150a687eaf4a6bb6a476cb1fbcad02c2d5e1c0573da8"} Oct 14 16:18:16 crc kubenswrapper[4860]: I1014 16:18:16.413957 4860 generic.go:334] "Generic (PLEG): container finished" podID="5d29a84e-87a8-4728-ad34-2dd3655a8d33" containerID="845c59f61175f5cbcba5150a687eaf4a6bb6a476cb1fbcad02c2d5e1c0573da8" exitCode=0 Oct 14 16:18:16 crc kubenswrapper[4860]: I1014 16:18:16.414043 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrzb8" event={"ID":"5d29a84e-87a8-4728-ad34-2dd3655a8d33","Type":"ContainerDied","Data":"845c59f61175f5cbcba5150a687eaf4a6bb6a476cb1fbcad02c2d5e1c0573da8"} Oct 14 16:18:16 crc kubenswrapper[4860]: I1014 16:18:16.416348 4860 generic.go:334] "Generic (PLEG): container finished" podID="fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b" containerID="ceb00f0d00f615ed2e5c00f6f4f4e74ababfd17e4dc2a10418bb740081d693f6" exitCode=0 Oct 14 16:18:16 crc kubenswrapper[4860]: I1014 16:18:16.416445 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rzg2h" event={"ID":"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b","Type":"ContainerDied","Data":"ceb00f0d00f615ed2e5c00f6f4f4e74ababfd17e4dc2a10418bb740081d693f6"} Oct 14 16:18:16 crc kubenswrapper[4860]: I1014 16:18:16.420133 4860 generic.go:334] "Generic (PLEG): container finished" podID="d70d6c5e-1949-453c-9a01-1434441de454" containerID="5f192e39c8b614c0c6bb6fa9c7d271fd13b6653ba2d65578ed0ab24b24573ea0" exitCode=0 Oct 14 16:18:16 crc kubenswrapper[4860]: I1014 16:18:16.420365 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tc2tp" event={"ID":"d70d6c5e-1949-453c-9a01-1434441de454","Type":"ContainerDied","Data":"5f192e39c8b614c0c6bb6fa9c7d271fd13b6653ba2d65578ed0ab24b24573ea0"} Oct 14 16:18:20 crc kubenswrapper[4860]: I1014 16:18:20.829748 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-777489d894-44kqm_c1b85a60-532b-442f-ab52-86a88e9e2400/barbican-api/0.log" Oct 14 16:18:21 crc kubenswrapper[4860]: I1014 16:18:21.077981 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8647888b98-65v2r_ff1ff7d7-b307-4f43-a76a-09da21f5fd05/barbican-keystone-listener/0.log" Oct 14 16:18:21 crc kubenswrapper[4860]: I1014 16:18:21.104189 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-777489d894-44kqm_c1b85a60-532b-442f-ab52-86a88e9e2400/barbican-api-log/0.log" Oct 14 16:18:21 crc kubenswrapper[4860]: I1014 16:18:21.244976 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8647888b98-65v2r_ff1ff7d7-b307-4f43-a76a-09da21f5fd05/barbican-keystone-listener-log/0.log" Oct 14 16:18:21 crc kubenswrapper[4860]: I1014 16:18:21.318466 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-98dc5ccc5-l88l9_ef6678e8-7116-4dc1-a7cd-420317d521eb/barbican-worker/0.log" Oct 14 16:18:21 crc kubenswrapper[4860]: I1014 16:18:21.458506 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-98dc5ccc5-l88l9_ef6678e8-7116-4dc1-a7cd-420317d521eb/barbican-worker-log/0.log" Oct 14 16:18:21 crc kubenswrapper[4860]: I1014 16:18:21.463570 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tc2tp" event={"ID":"d70d6c5e-1949-453c-9a01-1434441de454","Type":"ContainerStarted","Data":"8d2af52e67e31000f71ba136b2e4d5880d281573c13ea7e45cd59b906ecb86d2"} Oct 14 16:18:21 crc kubenswrapper[4860]: I1014 16:18:21.465310 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrzb8" event={"ID":"5d29a84e-87a8-4728-ad34-2dd3655a8d33","Type":"ContainerStarted","Data":"7634c2ce744788e27cc4640441573e0bcb11bdd3399297c12fe4e50f6d50af72"} Oct 14 16:18:21 crc kubenswrapper[4860]: I1014 16:18:21.467273 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rzg2h" event={"ID":"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b","Type":"ContainerStarted","Data":"a501a5c3ff0357a158ab9dc67ac9c087f4f34e425f8fbdd57fc83e94f392cf98"} Oct 14 16:18:21 crc kubenswrapper[4860]: I1014 16:18:21.676366 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-8nt9j_18e270ba-e48c-4f9e-bc6a-8269b31f5698/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:18:21 crc kubenswrapper[4860]: I1014 16:18:21.749657 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee/ceilometer-central-agent/0.log" Oct 14 16:18:21 crc kubenswrapper[4860]: I1014 16:18:21.921727 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee/proxy-httpd/0.log" Oct 14 16:18:21 crc kubenswrapper[4860]: I1014 16:18:21.968630 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee/ceilometer-notification-agent/0.log" Oct 14 16:18:21 crc kubenswrapper[4860]: I1014 16:18:21.991821 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee/sg-core/0.log" Oct 14 16:18:22 crc kubenswrapper[4860]: I1014 16:18:22.194536 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_c1c38bae-5346-4f5a-ad7c-24f82dd147cf/cinder-api/0.log" Oct 14 16:18:22 crc kubenswrapper[4860]: I1014 16:18:22.226506 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c1c38bae-5346-4f5a-ad7c-24f82dd147cf/cinder-api-log/0.log" Oct 14 16:18:22 crc kubenswrapper[4860]: I1014 16:18:22.342926 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_33f4677b-3c11-4662-9129-35805ee9cab0/cinder-scheduler/0.log" Oct 14 16:18:22 crc kubenswrapper[4860]: I1014 16:18:22.500268 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_33f4677b-3c11-4662-9129-35805ee9cab0/probe/0.log" Oct 14 16:18:22 crc kubenswrapper[4860]: I1014 16:18:22.510986 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-kbbxq_72789ed5-d4cd-4245-ad23-5114f65ab462/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:18:22 crc kubenswrapper[4860]: I1014 16:18:22.676313 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-9sdm6_fd03522b-4930-4c43-ae91-76bd6891424a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:18:22 crc kubenswrapper[4860]: I1014 16:18:22.773539 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mjlnk_8815aac7-80df-436c-ad49-c49907b6ed3c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:18:22 crc kubenswrapper[4860]: I1014 16:18:22.904333 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-bh2nm_2973f190-e42c-4031-9746-70704bafe957/init/0.log" Oct 14 16:18:23 crc kubenswrapper[4860]: I1014 16:18:23.244624 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-bh2nm_2973f190-e42c-4031-9746-70704bafe957/init/0.log" Oct 14 16:18:23 crc kubenswrapper[4860]: I1014 16:18:23.266704 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-sk6xq_3b6f14ce-02b7-4b0c-91f7-de180b724b23/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:18:23 crc kubenswrapper[4860]: I1014 16:18:23.458501 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-bh2nm_2973f190-e42c-4031-9746-70704bafe957/dnsmasq-dns/0.log" Oct 14 16:18:23 crc kubenswrapper[4860]: I1014 16:18:23.543749 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263/glance-log/0.log" Oct 14 16:18:23 crc kubenswrapper[4860]: I1014 16:18:23.868223 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9eea5159-5fa7-4ef7-a7c3-4f98d05085e3/glance-log/0.log" Oct 14 16:18:23 crc kubenswrapper[4860]: I1014 16:18:23.902571 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_16fc54e6-69a7-4cd1-8cf0-e7a7c7b22263/glance-httpd/0.log" Oct 14 16:18:23 crc kubenswrapper[4860]: I1014 16:18:23.909497 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9eea5159-5fa7-4ef7-a7c3-4f98d05085e3/glance-httpd/0.log" Oct 14 16:18:24 crc kubenswrapper[4860]: I1014 16:18:24.220913 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-8795558b4-cgsrj_ba50439f-28b5-4b76-9afb-b705c4037f8d/horizon/1.log" Oct 14 16:18:24 crc kubenswrapper[4860]: I1014 16:18:24.254576 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8795558b4-cgsrj_ba50439f-28b5-4b76-9afb-b705c4037f8d/horizon/0.log" Oct 14 16:18:24 crc kubenswrapper[4860]: I1014 16:18:24.559115 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-nl8rk_21809a83-1209-4a97-a550-1dfcccd04ec3/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:18:24 crc kubenswrapper[4860]: I1014 16:18:24.676587 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-dq8ms_1e540b72-fca1-4c14-8830-8fa070543f8c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:18:24 crc kubenswrapper[4860]: I1014 16:18:24.989974 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29340961-lsqnc_e39034f1-fd48-4b12-a14c-55abc2828764/keystone-cron/0.log" Oct 14 16:18:25 crc kubenswrapper[4860]: I1014 16:18:25.081536 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8795558b4-cgsrj_ba50439f-28b5-4b76-9afb-b705c4037f8d/horizon-log/0.log" Oct 14 16:18:25 crc kubenswrapper[4860]: I1014 16:18:25.126598 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6922ab3e-5c2c-43d1-8b29-824fd8c4146c/kube-state-metrics/0.log" Oct 14 16:18:25 crc kubenswrapper[4860]: I1014 16:18:25.343883 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dxhbb_ad612cd6-7c9d-44c4-aa1e-33055de4eee6/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:18:25 crc kubenswrapper[4860]: I1014 16:18:25.510087 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-79ffbddbb5-96v5k_17bac919-7f29-4225-967b-1001b22075b4/keystone-api/0.log" Oct 14 16:18:25 crc kubenswrapper[4860]: I1014 16:18:25.825692 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-vxxjl_3601c2b8-7185-42fa-bbe1-b0e6b1e07332/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:18:26 crc kubenswrapper[4860]: I1014 16:18:26.102970 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b4bf5b577-882p6_87973523-835b-4676-babb-8ed122fa8b93/neutron-httpd/0.log" Oct 14 16:18:26 crc kubenswrapper[4860]: I1014 16:18:26.357150 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b4bf5b577-882p6_87973523-835b-4676-babb-8ed122fa8b93/neutron-api/0.log" Oct 14 16:18:26 crc kubenswrapper[4860]: I1014 16:18:26.520953 4860 generic.go:334] "Generic (PLEG): container finished" podID="d70d6c5e-1949-453c-9a01-1434441de454" containerID="8d2af52e67e31000f71ba136b2e4d5880d281573c13ea7e45cd59b906ecb86d2" exitCode=0 Oct 14 16:18:26 crc kubenswrapper[4860]: I1014 16:18:26.521183 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tc2tp" event={"ID":"d70d6c5e-1949-453c-9a01-1434441de454","Type":"ContainerDied","Data":"8d2af52e67e31000f71ba136b2e4d5880d281573c13ea7e45cd59b906ecb86d2"} Oct 14 16:18:27 crc kubenswrapper[4860]: I1014 16:18:27.330608 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_0dd800a1-57e1-4a3b-994b-304c941b9e5e/nova-cell0-conductor-conductor/0.log" Oct 14 16:18:27 crc kubenswrapper[4860]: I1014 16:18:27.446357 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_37eab7d3-1474-46a2-85f7-9f874511aea2/nova-cell1-conductor-conductor/0.log" Oct 14 16:18:27 crc kubenswrapper[4860]: I1014 16:18:27.812986 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_be5646ba-6f94-4628-85ef-5091fee066d5/nova-api-log/0.log" Oct 14 16:18:27 crc kubenswrapper[4860]: I1014 16:18:27.986271 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_d40bc087-e558-4107-8e76-b5daa3ff73c1/nova-cell1-novncproxy-novncproxy/0.log" Oct 14 16:18:28 crc kubenswrapper[4860]: I1014 16:18:28.154771 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_be5646ba-6f94-4628-85ef-5091fee066d5/nova-api-api/0.log" Oct 14 16:18:28 crc kubenswrapper[4860]: I1014 16:18:28.199714 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-52bv4_5ea863c9-1241-4529-b07a-7ded53a8a9ca/nova-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:18:28 crc kubenswrapper[4860]: I1014 16:18:28.543597 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tc2tp" event={"ID":"d70d6c5e-1949-453c-9a01-1434441de454","Type":"ContainerStarted","Data":"506ba0a87a4f116a6cf31476d7a636979cb5e69dd7f5bbed33fa0b7f3f172709"} Oct 14 16:18:28 crc kubenswrapper[4860]: I1014 16:18:28.570261 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tc2tp" podStartSLOduration=11.344404213 podStartE2EDuration="22.570242812s" podCreationTimestamp="2025-10-14 16:18:06 +0000 UTC" firstStartedPulling="2025-10-14 16:18:16.422216996 +0000 UTC m=+5358.009000465" lastFinishedPulling="2025-10-14 16:18:27.648055615 +0000 UTC m=+5369.234839064" observedRunningTime="2025-10-14 16:18:28.562418532 +0000 UTC m=+5370.149201981" watchObservedRunningTime="2025-10-14 16:18:28.570242812 +0000 UTC m=+5370.157026261" Oct 14 16:18:28 crc kubenswrapper[4860]: I1014 16:18:28.577143 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5/nova-metadata-log/0.log" Oct 14 16:18:28 crc kubenswrapper[4860]: I1014 16:18:28.893121 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_84bf98f8-38a7-469a-a6ce-f3b573aa1356/mysql-bootstrap/0.log" Oct 14 16:18:29 crc kubenswrapper[4860]: I1014 16:18:29.252330 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_84bf98f8-38a7-469a-a6ce-f3b573aa1356/galera/0.log" Oct 14 16:18:29 crc kubenswrapper[4860]: I1014 16:18:29.305511 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_84bf98f8-38a7-469a-a6ce-f3b573aa1356/mysql-bootstrap/0.log" Oct 14 16:18:29 crc kubenswrapper[4860]: I1014 16:18:29.653125 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0619b1f4-ea36-41ab-a97b-2a97d516e53c/mysql-bootstrap/0.log" Oct 14 16:18:30 crc kubenswrapper[4860]: I1014 16:18:30.126871 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0619b1f4-ea36-41ab-a97b-2a97d516e53c/mysql-bootstrap/0.log" Oct 14 
16:18:30 crc kubenswrapper[4860]: I1014 16:18:30.226055 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0619b1f4-ea36-41ab-a97b-2a97d516e53c/galera/0.log" Oct 14 16:18:30 crc kubenswrapper[4860]: I1014 16:18:30.451921 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_0923e67e-dcfe-48bd-9987-c24810447a3e/openstackclient/0.log" Oct 14 16:18:30 crc kubenswrapper[4860]: I1014 16:18:30.668329 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-s4vnv_cb8d65af-6ce5-4a61-ad15-c32aeb71c190/openstack-network-exporter/0.log" Oct 14 16:18:30 crc kubenswrapper[4860]: I1014 16:18:30.709940 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_edadf2e8-459f-4994-a1f8-a059cbdb46c6/nova-scheduler-scheduler/0.log" Oct 14 16:18:30 crc kubenswrapper[4860]: I1014 16:18:30.966225 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vbhtr_517eb23f-ec49-4288-a019-df9ac4da8ccd/ovsdb-server-init/0.log" Oct 14 16:18:31 crc kubenswrapper[4860]: I1014 16:18:31.213515 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vbhtr_517eb23f-ec49-4288-a019-df9ac4da8ccd/ovs-vswitchd/0.log" Oct 14 16:18:31 crc kubenswrapper[4860]: I1014 16:18:31.218677 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vbhtr_517eb23f-ec49-4288-a019-df9ac4da8ccd/ovsdb-server/0.log" Oct 14 16:18:31 crc kubenswrapper[4860]: I1014 16:18:31.251775 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vbhtr_517eb23f-ec49-4288-a019-df9ac4da8ccd/ovsdb-server-init/0.log" Oct 14 16:18:31 crc kubenswrapper[4860]: I1014 16:18:31.262086 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0e1edc60-0adf-45a1-ab4a-caa4ffc5cbd5/nova-metadata-metadata/0.log" Oct 14 16:18:31 crc kubenswrapper[4860]: I1014 16:18:31.565378 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_03a44669-ea47-471b-a369-93f6f85bec6b/memcached/0.log" Oct 14 16:18:31 crc kubenswrapper[4860]: I1014 16:18:31.605147 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sc6wm_8fbd86ca-1d38-4b27-bd36-62198c367b3d/ovn-controller/0.log" Oct 14 16:18:31 crc kubenswrapper[4860]: I1014 16:18:31.798930 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-jm89w_758f6aec-34fc-48fc-a6bb-f6ac287a02d0/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 16:18:31 crc kubenswrapper[4860]: I1014 16:18:31.806487 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9be28925-a379-4ecf-8021-5a16dbd9b666/openstack-network-exporter/0.log" Oct 14 16:18:31 crc kubenswrapper[4860]: I1014 16:18:31.846001 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9be28925-a379-4ecf-8021-5a16dbd9b666/ovn-northd/0.log" Oct 14 16:18:31 crc kubenswrapper[4860]: I1014 16:18:31.992897 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9ea3e827-d3d5-481d-b8f6-90b20be97f2e/openstack-network-exporter/0.log" Oct 14 16:18:32 crc kubenswrapper[4860]: I1014 16:18:32.112192 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9ea3e827-d3d5-481d-b8f6-90b20be97f2e/ovsdbserver-nb/0.log" Oct 14 16:18:32 crc 
kubenswrapper[4860]: I1014 16:18:32.122760 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ac3dbbff-ef4c-461d-b2a0-58284b598cb4/openstack-network-exporter/0.log"
Oct 14 16:18:32 crc kubenswrapper[4860]: I1014 16:18:32.207630 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ac3dbbff-ef4c-461d-b2a0-58284b598cb4/ovsdbserver-sb/0.log"
Oct 14 16:18:32 crc kubenswrapper[4860]: I1014 16:18:32.425341 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b636d89a-e295-48f6-8679-c6c7b0f998cf/setup-container/0.log"
Oct 14 16:18:32 crc kubenswrapper[4860]: I1014 16:18:32.675901 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b636d89a-e295-48f6-8679-c6c7b0f998cf/setup-container/0.log"
Oct 14 16:18:32 crc kubenswrapper[4860]: I1014 16:18:32.706857 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64cf955b6-w5x5t_ab2aac74-c03a-4d14-a332-ab84606c9864/placement-api/0.log"
Oct 14 16:18:32 crc kubenswrapper[4860]: I1014 16:18:32.753987 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b636d89a-e295-48f6-8679-c6c7b0f998cf/rabbitmq/0.log"
Oct 14 16:18:32 crc kubenswrapper[4860]: I1014 16:18:32.900364 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64cf955b6-w5x5t_ab2aac74-c03a-4d14-a332-ab84606c9864/placement-log/0.log"
Oct 14 16:18:32 crc kubenswrapper[4860]: I1014 16:18:32.905335 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a7bde387-0de9-44df-84cf-3db5e96019c9/setup-container/0.log"
Oct 14 16:18:33 crc kubenswrapper[4860]: I1014 16:18:33.139521 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a7bde387-0de9-44df-84cf-3db5e96019c9/setup-container/0.log"
Oct 14 16:18:33 crc kubenswrapper[4860]: I1014 16:18:33.247598 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a7bde387-0de9-44df-84cf-3db5e96019c9/rabbitmq/0.log"
Oct 14 16:18:33 crc kubenswrapper[4860]: I1014 16:18:33.276469 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-msl4k_bf875e18-0a4b-4caf-85e0-fe7d96ace688/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 14 16:18:33 crc kubenswrapper[4860]: I1014 16:18:33.488070 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-c77ns_a935dc27-6373-4538-8676-b2532a79575c/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 14 16:18:33 crc kubenswrapper[4860]: I1014 16:18:33.573273 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4jpw4_9af1a0e5-8c28-4be6-8906-f60775a83304/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 14 16:18:33 crc kubenswrapper[4860]: I1014 16:18:33.630271 4860 generic.go:334] "Generic (PLEG): container finished" podID="fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b" containerID="a501a5c3ff0357a158ab9dc67ac9c087f4f34e425f8fbdd57fc83e94f392cf98" exitCode=0
Oct 14 16:18:33 crc kubenswrapper[4860]: I1014 16:18:33.630318 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rzg2h" event={"ID":"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b","Type":"ContainerDied","Data":"a501a5c3ff0357a158ab9dc67ac9c087f4f34e425f8fbdd57fc83e94f392cf98"}
Oct 14 16:18:33 crc kubenswrapper[4860]: I1014 16:18:33.632277 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5g7xk_442b40ad-4a75-4690-ab2a-a63194e46aac/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 14 16:18:33 crc kubenswrapper[4860]: I1014 16:18:33.812449 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-47tvx_b3b6bfde-9f16-4803-8b4c-2aba73c9612f/ssh-known-hosts-edpm-deployment/0.log"
Oct 14 16:18:34 crc kubenswrapper[4860]: I1014 16:18:34.055393 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-76b8bb94b7-r2cx7_b791e9e4-1b27-429a-9811-2b956a974e3a/proxy-httpd/0.log"
Oct 14 16:18:34 crc kubenswrapper[4860]: I1014 16:18:34.070533 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-76b8bb94b7-r2cx7_b791e9e4-1b27-429a-9811-2b956a974e3a/proxy-server/0.log"
Oct 14 16:18:34 crc kubenswrapper[4860]: I1014 16:18:34.165990 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-b2bd2_4cc19e55-2664-49bd-8f7e-856d1c9b3ecd/swift-ring-rebalance/0.log"
Oct 14 16:18:34 crc kubenswrapper[4860]: I1014 16:18:34.308345 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/account-reaper/0.log"
Oct 14 16:18:34 crc kubenswrapper[4860]: I1014 16:18:34.313232 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/account-auditor/0.log"
Oct 14 16:18:34 crc kubenswrapper[4860]: I1014 16:18:34.436134 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/account-server/0.log"
Oct 14 16:18:34 crc kubenswrapper[4860]: I1014 16:18:34.549931 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/account-replicator/0.log"
Oct 14 16:18:34 crc kubenswrapper[4860]: I1014 16:18:34.576330 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/container-replicator/0.log"
Oct 14 16:18:34 crc kubenswrapper[4860]: I1014 16:18:34.628065 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/container-auditor/0.log"
Oct 14 16:18:34 crc kubenswrapper[4860]: I1014 16:18:34.677145 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/container-server/0.log"
Oct 14 16:18:34 crc kubenswrapper[4860]: I1014 16:18:34.690381 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/container-updater/0.log"
Oct 14 16:18:34 crc kubenswrapper[4860]: I1014 16:18:34.864359 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/object-expirer/0.log"
Oct 14 16:18:34 crc kubenswrapper[4860]: I1014 16:18:34.899762 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/object-replicator/0.log"
Oct 14 16:18:34 crc kubenswrapper[4860]: I1014 16:18:34.912993 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/object-auditor/0.log"
Oct 14 16:18:34 crc kubenswrapper[4860]: I1014 16:18:34.934927 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/object-server/0.log"
Oct 14 16:18:34 crc kubenswrapper[4860]: I1014 16:18:34.995806 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/object-updater/0.log"
Oct 14 16:18:35 crc kubenswrapper[4860]: I1014 16:18:35.062771 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/rsync/0.log"
Oct 14 16:18:35 crc kubenswrapper[4860]: I1014 16:18:35.147895 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e7daefc0-ac71-4a73-9da7-7cf2fecfaf4a/swift-recon-cron/0.log"
Oct 14 16:18:35 crc kubenswrapper[4860]: I1014 16:18:35.413310 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_ccedbfab-f66d-49a5-baac-50c603e57c98/tempest-tests-tempest-tests-runner/0.log"
Oct 14 16:18:35 crc kubenswrapper[4860]: I1014 16:18:35.553740 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b9379888-4451-403a-adb4-c9b17890351a/test-operator-logs-container/0.log"
Oct 14 16:18:35 crc kubenswrapper[4860]: I1014 16:18:35.689137 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-dsbdw_567e371c-991d-4515-98bf-b17f6573a744/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 14 16:18:35 crc kubenswrapper[4860]: I1014 16:18:35.749390 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-22rlb_487e54e1-aee7-4e2c-abdd-903ea61b0b11/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 14 16:18:37 crc kubenswrapper[4860]: I1014 16:18:37.793244 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tc2tp"
Oct 14 16:18:37 crc kubenswrapper[4860]: I1014 16:18:37.793644 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tc2tp"
Oct 14 16:18:38 crc kubenswrapper[4860]: I1014 16:18:38.670788 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rzg2h" event={"ID":"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b","Type":"ContainerStarted","Data":"0712a1543941fc148211b87800337cb9c50565b03127bca906c773d8bdf29aa8"}
Oct 14 16:18:38 crc kubenswrapper[4860]: I1014 16:18:38.694471 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rzg2h" podStartSLOduration=8.626113651 podStartE2EDuration="29.694450459s" podCreationTimestamp="2025-10-14 16:18:09 +0000 UTC" firstStartedPulling="2025-10-14 16:18:16.419574732 +0000 UTC m=+5358.006358201" lastFinishedPulling="2025-10-14 16:18:37.48791156 +0000 UTC m=+5379.074695009" observedRunningTime="2025-10-14 16:18:38.687443279 +0000 UTC m=+5380.274226728" watchObservedRunningTime="2025-10-14 16:18:38.694450459 +0000 UTC m=+5380.281233908"
Oct 14 16:18:38 crc kubenswrapper[4860]: I1014 16:18:38.856680 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-tc2tp" podUID="d70d6c5e-1949-453c-9a01-1434441de454" containerName="registry-server" probeResult="failure" output=<
Oct 14 16:18:38 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s
Oct 14 16:18:38 crc kubenswrapper[4860]: >
Oct 14 16:18:40 crc kubenswrapper[4860]: I1014 16:18:40.205824 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rzg2h"
Oct 14 16:18:40 crc kubenswrapper[4860]: I1014 16:18:40.206614 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rzg2h"
Oct 14 16:18:40 crc kubenswrapper[4860]: I1014 16:18:40.689420 4860 generic.go:334] "Generic (PLEG): container finished" podID="5d29a84e-87a8-4728-ad34-2dd3655a8d33" containerID="7634c2ce744788e27cc4640441573e0bcb11bdd3399297c12fe4e50f6d50af72" exitCode=0
Oct 14 16:18:40 crc kubenswrapper[4860]: I1014 16:18:40.689461 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrzb8" event={"ID":"5d29a84e-87a8-4728-ad34-2dd3655a8d33","Type":"ContainerDied","Data":"7634c2ce744788e27cc4640441573e0bcb11bdd3399297c12fe4e50f6d50af72"}
Oct 14 16:18:41 crc kubenswrapper[4860]: I1014 16:18:41.282603 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rzg2h" podUID="fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b" containerName="registry-server" probeResult="failure" output=<
Oct 14 16:18:41 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s
Oct 14 16:18:41 crc kubenswrapper[4860]: >
Oct 14 16:18:41 crc kubenswrapper[4860]: I1014 16:18:41.721709 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrzb8" event={"ID":"5d29a84e-87a8-4728-ad34-2dd3655a8d33","Type":"ContainerStarted","Data":"54ad4319e1bb704d02f517ee8334ab8138cde6d995ff253bc3c93ebec9093c4e"}
Oct 14 16:18:41 crc kubenswrapper[4860]: I1014 16:18:41.761349 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nrzb8" podStartSLOduration=9.766869102 podStartE2EDuration="34.761331548s" podCreationTimestamp="2025-10-14 16:18:07 +0000 UTC" firstStartedPulling="2025-10-14 16:18:16.417079361 +0000 UTC m=+5358.003862830" lastFinishedPulling="2025-10-14 16:18:41.411541827 +0000 UTC m=+5382.998325276" observedRunningTime="2025-10-14 16:18:41.757475585 +0000 UTC m=+5383.344259044" watchObservedRunningTime="2025-10-14 16:18:41.761331548 +0000 UTC m=+5383.348114997"
Oct 14 16:18:47 crc kubenswrapper[4860]: I1014 16:18:47.793457 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nrzb8"
Oct 14 16:18:47 crc kubenswrapper[4860]: I1014 16:18:47.794012 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nrzb8"
Oct 14 16:18:47 crc kubenswrapper[4860]: I1014 16:18:47.854510 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tc2tp"
Oct 14 16:18:47 crc kubenswrapper[4860]: I1014 16:18:47.903045 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tc2tp"
Oct 14 16:18:48 crc kubenswrapper[4860]: I1014 16:18:48.099342 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tc2tp"]
Oct 14 16:18:48 crc kubenswrapper[4860]: I1014 16:18:48.861499 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nrzb8" podUID="5d29a84e-87a8-4728-ad34-2dd3655a8d33" containerName="registry-server" probeResult="failure" output=<
Oct 14 16:18:48 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s
Oct 14 16:18:48 crc kubenswrapper[4860]: >
Oct 14 16:18:49 crc kubenswrapper[4860]: I1014 16:18:49.792305 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tc2tp" podUID="d70d6c5e-1949-453c-9a01-1434441de454" containerName="registry-server" containerID="cri-o://506ba0a87a4f116a6cf31476d7a636979cb5e69dd7f5bbed33fa0b7f3f172709" gracePeriod=2
Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.277232 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rzg2h"
Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.352601 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rzg2h"
Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.380784 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tc2tp"
Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.474714 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70d6c5e-1949-453c-9a01-1434441de454-utilities\") pod \"d70d6c5e-1949-453c-9a01-1434441de454\" (UID: \"d70d6c5e-1949-453c-9a01-1434441de454\") "
Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.474836 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70d6c5e-1949-453c-9a01-1434441de454-catalog-content\") pod \"d70d6c5e-1949-453c-9a01-1434441de454\" (UID: \"d70d6c5e-1949-453c-9a01-1434441de454\") "
Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.475025 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmrxh\" (UniqueName: \"kubernetes.io/projected/d70d6c5e-1949-453c-9a01-1434441de454-kube-api-access-rmrxh\") pod \"d70d6c5e-1949-453c-9a01-1434441de454\" (UID: \"d70d6c5e-1949-453c-9a01-1434441de454\") "
Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.475203 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d70d6c5e-1949-453c-9a01-1434441de454-utilities" (OuterVolumeSpecName: "utilities") pod "d70d6c5e-1949-453c-9a01-1434441de454" (UID: "d70d6c5e-1949-453c-9a01-1434441de454"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.475461 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70d6c5e-1949-453c-9a01-1434441de454-utilities\") on node \"crc\" DevicePath \"\""
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.505227 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d70d6c5e-1949-453c-9a01-1434441de454-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d70d6c5e-1949-453c-9a01-1434441de454" (UID: "d70d6c5e-1949-453c-9a01-1434441de454"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.577384 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmrxh\" (UniqueName: \"kubernetes.io/projected/d70d6c5e-1949-453c-9a01-1434441de454-kube-api-access-rmrxh\") on node \"crc\" DevicePath \"\"" Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.577683 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70d6c5e-1949-453c-9a01-1434441de454-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.816758 4860 generic.go:334] "Generic (PLEG): container finished" podID="d70d6c5e-1949-453c-9a01-1434441de454" containerID="506ba0a87a4f116a6cf31476d7a636979cb5e69dd7f5bbed33fa0b7f3f172709" exitCode=0 Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.817406 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tc2tp" Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.817657 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tc2tp" event={"ID":"d70d6c5e-1949-453c-9a01-1434441de454","Type":"ContainerDied","Data":"506ba0a87a4f116a6cf31476d7a636979cb5e69dd7f5bbed33fa0b7f3f172709"} Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.817745 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tc2tp" event={"ID":"d70d6c5e-1949-453c-9a01-1434441de454","Type":"ContainerDied","Data":"4bdfc5137211ed2b595e514455bdad1751bdbb2d6099b80308c965cf3103f668"} Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.817863 4860 scope.go:117] "RemoveContainer" containerID="506ba0a87a4f116a6cf31476d7a636979cb5e69dd7f5bbed33fa0b7f3f172709" Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.864093 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tc2tp"] Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.869054 4860 scope.go:117] "RemoveContainer" containerID="8d2af52e67e31000f71ba136b2e4d5880d281573c13ea7e45cd59b906ecb86d2" Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.873665 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tc2tp"] Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.901425 4860 scope.go:117] "RemoveContainer" containerID="5f192e39c8b614c0c6bb6fa9c7d271fd13b6653ba2d65578ed0ab24b24573ea0" Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.943846 4860 scope.go:117] "RemoveContainer" containerID="506ba0a87a4f116a6cf31476d7a636979cb5e69dd7f5bbed33fa0b7f3f172709" Oct 14 16:18:50 crc kubenswrapper[4860]: E1014 16:18:50.944964 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"506ba0a87a4f116a6cf31476d7a636979cb5e69dd7f5bbed33fa0b7f3f172709\": container with ID starting with 
506ba0a87a4f116a6cf31476d7a636979cb5e69dd7f5bbed33fa0b7f3f172709 not found: ID does not exist" containerID="506ba0a87a4f116a6cf31476d7a636979cb5e69dd7f5bbed33fa0b7f3f172709" Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.945123 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"506ba0a87a4f116a6cf31476d7a636979cb5e69dd7f5bbed33fa0b7f3f172709"} err="failed to get container status \"506ba0a87a4f116a6cf31476d7a636979cb5e69dd7f5bbed33fa0b7f3f172709\": rpc error: code = NotFound desc = could not find container \"506ba0a87a4f116a6cf31476d7a636979cb5e69dd7f5bbed33fa0b7f3f172709\": container with ID starting with 506ba0a87a4f116a6cf31476d7a636979cb5e69dd7f5bbed33fa0b7f3f172709 not found: ID does not exist" Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.945222 4860 scope.go:117] "RemoveContainer" containerID="8d2af52e67e31000f71ba136b2e4d5880d281573c13ea7e45cd59b906ecb86d2" Oct 14 16:18:50 crc kubenswrapper[4860]: E1014 16:18:50.945977 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d2af52e67e31000f71ba136b2e4d5880d281573c13ea7e45cd59b906ecb86d2\": container with ID starting with 8d2af52e67e31000f71ba136b2e4d5880d281573c13ea7e45cd59b906ecb86d2 not found: ID does not exist" containerID="8d2af52e67e31000f71ba136b2e4d5880d281573c13ea7e45cd59b906ecb86d2" Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.946142 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d2af52e67e31000f71ba136b2e4d5880d281573c13ea7e45cd59b906ecb86d2"} err="failed to get container status \"8d2af52e67e31000f71ba136b2e4d5880d281573c13ea7e45cd59b906ecb86d2\": rpc error: code = NotFound desc = could not find container \"8d2af52e67e31000f71ba136b2e4d5880d281573c13ea7e45cd59b906ecb86d2\": container with ID starting with 8d2af52e67e31000f71ba136b2e4d5880d281573c13ea7e45cd59b906ecb86d2 not found: ID does not exist" Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.946244 4860 scope.go:117] "RemoveContainer" containerID="5f192e39c8b614c0c6bb6fa9c7d271fd13b6653ba2d65578ed0ab24b24573ea0" Oct 14 16:18:50 crc kubenswrapper[4860]: E1014 16:18:50.946565 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f192e39c8b614c0c6bb6fa9c7d271fd13b6653ba2d65578ed0ab24b24573ea0\": container with ID starting with 5f192e39c8b614c0c6bb6fa9c7d271fd13b6653ba2d65578ed0ab24b24573ea0 not found: ID does not exist" containerID="5f192e39c8b614c0c6bb6fa9c7d271fd13b6653ba2d65578ed0ab24b24573ea0" Oct 14 16:18:50 crc kubenswrapper[4860]: I1014 16:18:50.946666 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f192e39c8b614c0c6bb6fa9c7d271fd13b6653ba2d65578ed0ab24b24573ea0"} err="failed to get container status \"5f192e39c8b614c0c6bb6fa9c7d271fd13b6653ba2d65578ed0ab24b24573ea0\": rpc error: code = NotFound desc = could not find container \"5f192e39c8b614c0c6bb6fa9c7d271fd13b6653ba2d65578ed0ab24b24573ea0\": container with ID starting with 5f192e39c8b614c0c6bb6fa9c7d271fd13b6653ba2d65578ed0ab24b24573ea0 not found: ID does not exist" Oct 14 16:18:51 crc kubenswrapper[4860]: I1014 16:18:51.072708 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d70d6c5e-1949-453c-9a01-1434441de454" path="/var/lib/kubelet/pods/d70d6c5e-1949-453c-9a01-1434441de454/volumes" Oct 14 16:18:52 crc kubenswrapper[4860]: I1014 16:18:52.497570 
4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rzg2h"] Oct 14 16:18:52 crc kubenswrapper[4860]: I1014 16:18:52.498089 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rzg2h" podUID="fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b" containerName="registry-server" containerID="cri-o://0712a1543941fc148211b87800337cb9c50565b03127bca906c773d8bdf29aa8" gracePeriod=2 Oct 14 16:18:52 crc kubenswrapper[4860]: I1014 16:18:52.842816 4860 generic.go:334] "Generic (PLEG): container finished" podID="fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b" containerID="0712a1543941fc148211b87800337cb9c50565b03127bca906c773d8bdf29aa8" exitCode=0 Oct 14 16:18:52 crc kubenswrapper[4860]: I1014 16:18:52.843221 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rzg2h" event={"ID":"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b","Type":"ContainerDied","Data":"0712a1543941fc148211b87800337cb9c50565b03127bca906c773d8bdf29aa8"} Oct 14 16:18:52 crc kubenswrapper[4860]: I1014 16:18:52.843255 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rzg2h" event={"ID":"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b","Type":"ContainerDied","Data":"2a29fcf20ba0bb54aaf7254faf5e067a2fa3cfad75b30315bacf89a87613a5a6"} Oct 14 16:18:52 crc kubenswrapper[4860]: I1014 16:18:52.843269 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a29fcf20ba0bb54aaf7254faf5e067a2fa3cfad75b30315bacf89a87613a5a6" Oct 14 16:18:52 crc kubenswrapper[4860]: I1014 16:18:52.917481 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rzg2h" Oct 14 16:18:53 crc kubenswrapper[4860]: I1014 16:18:53.029223 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-catalog-content\") pod \"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b\" (UID: \"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b\") " Oct 14 16:18:53 crc kubenswrapper[4860]: I1014 16:18:53.029409 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b945p\" (UniqueName: \"kubernetes.io/projected/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-kube-api-access-b945p\") pod \"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b\" (UID: \"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b\") " Oct 14 16:18:53 crc kubenswrapper[4860]: I1014 16:18:53.029475 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-utilities\") pod \"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b\" (UID: \"fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b\") " Oct 14 16:18:53 crc kubenswrapper[4860]: I1014 16:18:53.030166 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-utilities" (OuterVolumeSpecName: "utilities") pod "fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b" (UID: "fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 16:18:53 crc kubenswrapper[4860]: I1014 16:18:53.035453 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-kube-api-access-b945p" (OuterVolumeSpecName: "kube-api-access-b945p") pod "fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b" (UID: "fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b"). InnerVolumeSpecName "kube-api-access-b945p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:18:53 crc kubenswrapper[4860]: I1014 16:18:53.076990 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b" (UID: "fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 16:18:53 crc kubenswrapper[4860]: I1014 16:18:53.140042 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 16:18:53 crc kubenswrapper[4860]: I1014 16:18:53.140069 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 16:18:53 crc kubenswrapper[4860]: I1014 16:18:53.140079 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b945p\" (UniqueName: \"kubernetes.io/projected/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b-kube-api-access-b945p\") on node \"crc\" DevicePath \"\"" Oct 14 16:18:53 crc kubenswrapper[4860]: I1014 16:18:53.850731 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rzg2h" Oct 14 16:18:53 crc kubenswrapper[4860]: I1014 16:18:53.881679 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rzg2h"] Oct 14 16:18:53 crc kubenswrapper[4860]: I1014 16:18:53.890451 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rzg2h"] Oct 14 16:18:55 crc kubenswrapper[4860]: I1014 16:18:55.088616 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b" path="/var/lib/kubelet/pods/fce5f4a7-c2be-4965-ade2-43c5ddcf9b0b/volumes" Oct 14 16:18:58 crc kubenswrapper[4860]: I1014 16:18:58.855967 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nrzb8" podUID="5d29a84e-87a8-4728-ad34-2dd3655a8d33" containerName="registry-server" probeResult="failure" output=< Oct 14 16:18:58 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Oct 14 16:18:58 crc kubenswrapper[4860]: > Oct 14 16:18:59 crc kubenswrapper[4860]: I1014 16:18:59.245881 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 16:18:59 crc kubenswrapper[4860]: I1014 16:18:59.245967 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 16:19:01 crc kubenswrapper[4860]: I1014 16:19:01.742665 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-lwpwz_95d281e4-c140-42c3-ba4e-3d36e98bb29c/manager/0.log" Oct 14 16:19:01 crc kubenswrapper[4860]: I1014 16:19:01.747540 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-lwpwz_95d281e4-c140-42c3-ba4e-3d36e98bb29c/kube-rbac-proxy/0.log" Oct 14 16:19:01 crc kubenswrapper[4860]: I1014 16:19:01.948779 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb_940211d1-5595-4283-b049-57cc681b2ffc/util/0.log" Oct 14 16:19:02 crc kubenswrapper[4860]: I1014 16:19:02.125434 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb_940211d1-5595-4283-b049-57cc681b2ffc/util/0.log" Oct 14 16:19:02 crc kubenswrapper[4860]: I1014 16:19:02.143393 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb_940211d1-5595-4283-b049-57cc681b2ffc/pull/0.log" Oct 14 16:19:02 crc kubenswrapper[4860]: I1014 16:19:02.173332 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb_940211d1-5595-4283-b049-57cc681b2ffc/pull/0.log" Oct 14 16:19:02 crc kubenswrapper[4860]: I1014 16:19:02.354237 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb_940211d1-5595-4283-b049-57cc681b2ffc/extract/0.log" Oct 14 16:19:02 crc kubenswrapper[4860]: I1014 16:19:02.426613 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb_940211d1-5595-4283-b049-57cc681b2ffc/util/0.log" Oct 14 16:19:02 crc kubenswrapper[4860]: I1014 16:19:02.427677 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7465e9480f41b38398b1061ca94235bb698aaac886dc431cb5d83cc37fdslb_940211d1-5595-4283-b049-57cc681b2ffc/pull/0.log" Oct 14 16:19:02 crc kubenswrapper[4860]: I1014 16:19:02.551731 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-6lzwd_8680f35c-eae8-49e0-a670-d4b467a987f0/kube-rbac-proxy/0.log" Oct 14 16:19:02 crc kubenswrapper[4860]: I1014 16:19:02.576098 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-6lzwd_8680f35c-eae8-49e0-a670-d4b467a987f0/manager/0.log" Oct 14 16:19:02 crc kubenswrapper[4860]: I1014 16:19:02.663990 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-f2rfg_6b31fe2f-695e-4b8b-b632-7075e4a9740f/kube-rbac-proxy/0.log" Oct 14 16:19:02 crc kubenswrapper[4860]: I1014 16:19:02.749611 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-f2rfg_6b31fe2f-695e-4b8b-b632-7075e4a9740f/manager/0.log" Oct 14 16:19:02 crc kubenswrapper[4860]: I1014 16:19:02.886233 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-dgxfp_65912b78-7ceb-4bd0-ab72-70fd3574b786/kube-rbac-proxy/0.log" Oct 14 16:19:03 crc kubenswrapper[4860]: I1014 16:19:03.031464 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-dgxfp_65912b78-7ceb-4bd0-ab72-70fd3574b786/manager/0.log" Oct 14 16:19:03 crc kubenswrapper[4860]: I1014 16:19:03.098300 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-8mngx_e24ba4ef-9297-4d61-a338-941ce00a2391/kube-rbac-proxy/0.log" Oct 14 16:19:03 crc kubenswrapper[4860]: I1014 16:19:03.120760 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-8mngx_e24ba4ef-9297-4d61-a338-941ce00a2391/manager/0.log" Oct 14 16:19:03 crc kubenswrapper[4860]: I1014 16:19:03.251868 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-mdd5z_95d178d8-e3b2-4141-91af-b82fa61bd86a/kube-rbac-proxy/0.log" Oct 14 16:19:03 crc kubenswrapper[4860]: I1014 16:19:03.338276 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-mdd5z_95d178d8-e3b2-4141-91af-b82fa61bd86a/manager/0.log" Oct 14 16:19:03 crc kubenswrapper[4860]: I1014 16:19:03.571983 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-hpkm4_1b0c2826-792e-44ca-9bc1-830aefee72d6/kube-rbac-proxy/0.log" Oct 14 16:19:03 crc kubenswrapper[4860]: I1014 16:19:03.656689 4860 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-hpkm4_1b0c2826-792e-44ca-9bc1-830aefee72d6/manager/0.log" Oct 14 16:19:03 crc kubenswrapper[4860]: I1014 16:19:03.744128 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-l9w8x_e3456832-68ce-443e-825f-9d6af6cf829f/kube-rbac-proxy/0.log" Oct 14 16:19:03 crc kubenswrapper[4860]: I1014 16:19:03.878449 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-l9w8x_e3456832-68ce-443e-825f-9d6af6cf829f/manager/0.log" Oct 14 16:19:04 crc kubenswrapper[4860]: I1014 16:19:04.037319 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-8ht4q_4bbd7b36-79fe-423b-a5c6-2237390dea3f/kube-rbac-proxy/0.log" Oct 14 16:19:04 crc kubenswrapper[4860]: I1014 16:19:04.222136 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-8ht4q_4bbd7b36-79fe-423b-a5c6-2237390dea3f/manager/0.log" Oct 14 16:19:04 crc kubenswrapper[4860]: I1014 16:19:04.313229 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-bc4x8_786a4f8b-062c-46b7-8028-5079481427db/kube-rbac-proxy/0.log" Oct 14 16:19:04 crc kubenswrapper[4860]: I1014 16:19:04.394303 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-bc4x8_786a4f8b-062c-46b7-8028-5079481427db/manager/0.log" Oct 14 16:19:04 crc kubenswrapper[4860]: I1014 16:19:04.502502 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-2rpj7_5397040a-47ac-487d-8e5a-8fd02d6ec654/kube-rbac-proxy/0.log" Oct 14 16:19:04 crc kubenswrapper[4860]: I1014 16:19:04.597897 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-2rpj7_5397040a-47ac-487d-8e5a-8fd02d6ec654/manager/0.log" Oct 14 16:19:04 crc kubenswrapper[4860]: I1014 16:19:04.817160 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-lfntw_1e6c58c7-4e05-4c8d-98f0-2063b1ba613f/kube-rbac-proxy/0.log" Oct 14 16:19:04 crc kubenswrapper[4860]: I1014 16:19:04.879920 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-lfntw_1e6c58c7-4e05-4c8d-98f0-2063b1ba613f/manager/0.log" Oct 14 16:19:04 crc kubenswrapper[4860]: I1014 16:19:04.950272 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-tw4ph_1f864c3d-2e54-459b-b613-3785d0cf4ae6/kube-rbac-proxy/0.log" Oct 14 16:19:05 crc kubenswrapper[4860]: I1014 16:19:05.115142 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-5l2qq_ad189aa9-4e21-4d7e-b1de-83497bd83376/kube-rbac-proxy/0.log" Oct 14 16:19:05 crc kubenswrapper[4860]: I1014 16:19:05.158545 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-5l2qq_ad189aa9-4e21-4d7e-b1de-83497bd83376/manager/0.log" Oct 14 16:19:05 crc kubenswrapper[4860]: I1014 
16:19:05.177148 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-tw4ph_1f864c3d-2e54-459b-b613-3785d0cf4ae6/manager/0.log" Oct 14 16:19:05 crc kubenswrapper[4860]: I1014 16:19:05.307920 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt_df4d54ec-6345-4b47-8ae8-58ae0bf6da7f/kube-rbac-proxy/0.log" Oct 14 16:19:05 crc kubenswrapper[4860]: I1014 16:19:05.347122 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757d9dpbt_df4d54ec-6345-4b47-8ae8-58ae0bf6da7f/manager/0.log" Oct 14 16:19:05 crc kubenswrapper[4860]: I1014 16:19:05.547873 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-768555b76-hzfmn_c584f96e-f636-458e-9aca-f953ccf4a900/kube-rbac-proxy/0.log" Oct 14 16:19:05 crc kubenswrapper[4860]: I1014 16:19:05.762283 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5bc7d8f4c-hjwzs_88da4870-694b-46ba-9fda-5e85357bcb5e/kube-rbac-proxy/0.log" Oct 14 16:19:06 crc kubenswrapper[4860]: I1014 16:19:06.011109 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nv67r_24580674-ab2e-46af-b79a-e2396d8f61a5/registry-server/0.log" Oct 14 16:19:06 crc kubenswrapper[4860]: I1014 16:19:06.042760 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5bc7d8f4c-hjwzs_88da4870-694b-46ba-9fda-5e85357bcb5e/operator/0.log" Oct 14 16:19:06 crc kubenswrapper[4860]: I1014 16:19:06.198024 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-869cc7797f-4kql9_3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5/kube-rbac-proxy/0.log" Oct 14 16:19:06 crc kubenswrapper[4860]: I1014 16:19:06.359811 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-869cc7797f-4kql9_3d202f65-a2f2-4200-b3ea-e7a78ca5d5a5/manager/0.log" Oct 14 16:19:06 crc kubenswrapper[4860]: I1014 16:19:06.432775 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-l6rbl_d0ac64a4-cdc5-4362-9359-712291fafbdf/kube-rbac-proxy/0.log" Oct 14 16:19:06 crc kubenswrapper[4860]: I1014 16:19:06.574717 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-l6rbl_d0ac64a4-cdc5-4362-9359-712291fafbdf/manager/0.log" Oct 14 16:19:06 crc kubenswrapper[4860]: I1014 16:19:06.688149 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-768555b76-hzfmn_c584f96e-f636-458e-9aca-f953ccf4a900/manager/0.log" Oct 14 16:19:06 crc kubenswrapper[4860]: I1014 16:19:06.836504 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-gthfm_0bfbfdd2-7b80-46dc-a353-0f5858f0ae4b/operator/0.log" Oct 14 16:19:06 crc kubenswrapper[4860]: I1014 16:19:06.977618 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-rpjh4_4450d3fe-e520-48c6-ac1d-25344bdedc5e/kube-rbac-proxy/0.log" Oct 14 16:19:07 crc 
Oct 14 16:19:07 crc kubenswrapper[4860]: I1014 16:19:07.038148 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-rpjh4_4450d3fe-e520-48c6-ac1d-25344bdedc5e/manager/0.log"
Oct 14 16:19:07 crc kubenswrapper[4860]: I1014 16:19:07.146539 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-578874c84d-xpq8w_572e90ee-e3d4-44a0-b3c5-d0005f4cb41c/kube-rbac-proxy/0.log"
Oct 14 16:19:07 crc kubenswrapper[4860]: I1014 16:19:07.195298 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-578874c84d-xpq8w_572e90ee-e3d4-44a0-b3c5-d0005f4cb41c/manager/0.log"
Oct 14 16:19:07 crc kubenswrapper[4860]: I1014 16:19:07.262762 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-ffcdd6c94-9m7mm_f9603eeb-cc1b-4dc8-82e6-9cf64109c774/kube-rbac-proxy/0.log"
Oct 14 16:19:07 crc kubenswrapper[4860]: I1014 16:19:07.318238 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-ffcdd6c94-9m7mm_f9603eeb-cc1b-4dc8-82e6-9cf64109c774/manager/0.log"
Oct 14 16:19:07 crc kubenswrapper[4860]: I1014 16:19:07.366630 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-646675d848-lzb7d_9d1ea96c-cdba-4586-ae97-c008ff1ed05e/kube-rbac-proxy/0.log"
Oct 14 16:19:07 crc kubenswrapper[4860]: I1014 16:19:07.433733 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-646675d848-lzb7d_9d1ea96c-cdba-4586-ae97-c008ff1ed05e/manager/0.log"
Oct 14 16:19:07 crc kubenswrapper[4860]: I1014 16:19:07.842552 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nrzb8"
Oct 14 16:19:07 crc kubenswrapper[4860]: I1014 16:19:07.886551 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nrzb8"
Oct 14 16:19:09 crc kubenswrapper[4860]: I1014 16:19:09.082497 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nrzb8"]
Oct 14 16:19:09 crc kubenswrapper[4860]: I1014 16:19:09.083080 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nrzb8" podUID="5d29a84e-87a8-4728-ad34-2dd3655a8d33" containerName="registry-server" containerID="cri-o://54ad4319e1bb704d02f517ee8334ab8138cde6d995ff253bc3c93ebec9093c4e" gracePeriod=2
Oct 14 16:19:09 crc kubenswrapper[4860]: I1014 16:19:09.546115 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nrzb8"
Oct 14 16:19:09 crc kubenswrapper[4860]: I1014 16:19:09.669183 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pftpj\" (UniqueName: \"kubernetes.io/projected/5d29a84e-87a8-4728-ad34-2dd3655a8d33-kube-api-access-pftpj\") pod \"5d29a84e-87a8-4728-ad34-2dd3655a8d33\" (UID: \"5d29a84e-87a8-4728-ad34-2dd3655a8d33\") "
Oct 14 16:19:09 crc kubenswrapper[4860]: I1014 16:19:09.669336 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d29a84e-87a8-4728-ad34-2dd3655a8d33-catalog-content\") pod \"5d29a84e-87a8-4728-ad34-2dd3655a8d33\" (UID: \"5d29a84e-87a8-4728-ad34-2dd3655a8d33\") "
Oct 14 16:19:09 crc kubenswrapper[4860]: I1014 16:19:09.669384 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d29a84e-87a8-4728-ad34-2dd3655a8d33-utilities\") pod \"5d29a84e-87a8-4728-ad34-2dd3655a8d33\" (UID: \"5d29a84e-87a8-4728-ad34-2dd3655a8d33\") "
Oct 14 16:19:09 crc kubenswrapper[4860]: I1014 16:19:09.670118 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d29a84e-87a8-4728-ad34-2dd3655a8d33-utilities" (OuterVolumeSpecName: "utilities") pod "5d29a84e-87a8-4728-ad34-2dd3655a8d33" (UID: "5d29a84e-87a8-4728-ad34-2dd3655a8d33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 16:19:09 crc kubenswrapper[4860]: I1014 16:19:09.675347 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d29a84e-87a8-4728-ad34-2dd3655a8d33-kube-api-access-pftpj" (OuterVolumeSpecName: "kube-api-access-pftpj") pod "5d29a84e-87a8-4728-ad34-2dd3655a8d33" (UID: "5d29a84e-87a8-4728-ad34-2dd3655a8d33"). InnerVolumeSpecName "kube-api-access-pftpj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 16:19:09 crc kubenswrapper[4860]: I1014 16:19:09.768452 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d29a84e-87a8-4728-ad34-2dd3655a8d33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d29a84e-87a8-4728-ad34-2dd3655a8d33" (UID: "5d29a84e-87a8-4728-ad34-2dd3655a8d33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 16:19:09 crc kubenswrapper[4860]: I1014 16:19:09.771598 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d29a84e-87a8-4728-ad34-2dd3655a8d33-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 16:19:09 crc kubenswrapper[4860]: I1014 16:19:09.771643 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d29a84e-87a8-4728-ad34-2dd3655a8d33-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 16:19:09 crc kubenswrapper[4860]: I1014 16:19:09.771656 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pftpj\" (UniqueName: \"kubernetes.io/projected/5d29a84e-87a8-4728-ad34-2dd3655a8d33-kube-api-access-pftpj\") on node \"crc\" DevicePath \"\""
Oct 14 16:19:10 crc kubenswrapper[4860]: I1014 16:19:10.014286 4860 generic.go:334] "Generic (PLEG): container finished" podID="5d29a84e-87a8-4728-ad34-2dd3655a8d33" containerID="54ad4319e1bb704d02f517ee8334ab8138cde6d995ff253bc3c93ebec9093c4e" exitCode=0
Oct 14 16:19:10 crc kubenswrapper[4860]: I1014 16:19:10.014324 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrzb8" event={"ID":"5d29a84e-87a8-4728-ad34-2dd3655a8d33","Type":"ContainerDied","Data":"54ad4319e1bb704d02f517ee8334ab8138cde6d995ff253bc3c93ebec9093c4e"}
Oct 14 16:19:10 crc kubenswrapper[4860]: I1014 16:19:10.014348 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrzb8" event={"ID":"5d29a84e-87a8-4728-ad34-2dd3655a8d33","Type":"ContainerDied","Data":"9da282a2435533988edb6b205bf5bedadfcd3301bde7ea6e25dcf29c8e30183e"}
Oct 14 16:19:10 crc kubenswrapper[4860]: I1014 16:19:10.014363 4860 scope.go:117] "RemoveContainer" containerID="54ad4319e1bb704d02f517ee8334ab8138cde6d995ff253bc3c93ebec9093c4e"
Oct 14 16:19:10 crc kubenswrapper[4860]: I1014 16:19:10.014470 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nrzb8"
Oct 14 16:19:10 crc kubenswrapper[4860]: I1014 16:19:10.040834 4860 scope.go:117] "RemoveContainer" containerID="7634c2ce744788e27cc4640441573e0bcb11bdd3399297c12fe4e50f6d50af72"
Oct 14 16:19:10 crc kubenswrapper[4860]: I1014 16:19:10.052107 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nrzb8"]
Oct 14 16:19:10 crc kubenswrapper[4860]: I1014 16:19:10.062613 4860 scope.go:117] "RemoveContainer" containerID="845c59f61175f5cbcba5150a687eaf4a6bb6a476cb1fbcad02c2d5e1c0573da8"
Oct 14 16:19:10 crc kubenswrapper[4860]: I1014 16:19:10.065395 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nrzb8"]
Oct 14 16:19:10 crc kubenswrapper[4860]: I1014 16:19:10.104897 4860 scope.go:117] "RemoveContainer" containerID="54ad4319e1bb704d02f517ee8334ab8138cde6d995ff253bc3c93ebec9093c4e"
Oct 14 16:19:10 crc kubenswrapper[4860]: E1014 16:19:10.105477 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54ad4319e1bb704d02f517ee8334ab8138cde6d995ff253bc3c93ebec9093c4e\": container with ID starting with 54ad4319e1bb704d02f517ee8334ab8138cde6d995ff253bc3c93ebec9093c4e not found: ID does not exist" containerID="54ad4319e1bb704d02f517ee8334ab8138cde6d995ff253bc3c93ebec9093c4e"
Oct 14 16:19:10 crc kubenswrapper[4860]: I1014 16:19:10.105508 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ad4319e1bb704d02f517ee8334ab8138cde6d995ff253bc3c93ebec9093c4e"} err="failed to get container status \"54ad4319e1bb704d02f517ee8334ab8138cde6d995ff253bc3c93ebec9093c4e\": rpc error: code = NotFound desc = could not find container \"54ad4319e1bb704d02f517ee8334ab8138cde6d995ff253bc3c93ebec9093c4e\": container with ID starting with 54ad4319e1bb704d02f517ee8334ab8138cde6d995ff253bc3c93ebec9093c4e not found: ID does not exist"
Oct 14 16:19:10 crc kubenswrapper[4860]: I1014 16:19:10.105529 4860 scope.go:117] "RemoveContainer" containerID="7634c2ce744788e27cc4640441573e0bcb11bdd3399297c12fe4e50f6d50af72"
Oct 14 16:19:10 crc kubenswrapper[4860]: E1014 16:19:10.105759 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7634c2ce744788e27cc4640441573e0bcb11bdd3399297c12fe4e50f6d50af72\": container with ID starting with 7634c2ce744788e27cc4640441573e0bcb11bdd3399297c12fe4e50f6d50af72 not found: ID does not exist" containerID="7634c2ce744788e27cc4640441573e0bcb11bdd3399297c12fe4e50f6d50af72"
Oct 14 16:19:10 crc kubenswrapper[4860]: I1014 16:19:10.105785 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7634c2ce744788e27cc4640441573e0bcb11bdd3399297c12fe4e50f6d50af72"} err="failed to get container status \"7634c2ce744788e27cc4640441573e0bcb11bdd3399297c12fe4e50f6d50af72\": rpc error: code = NotFound desc = could not find container \"7634c2ce744788e27cc4640441573e0bcb11bdd3399297c12fe4e50f6d50af72\": container with ID starting with 7634c2ce744788e27cc4640441573e0bcb11bdd3399297c12fe4e50f6d50af72 not found: ID does not exist"
Oct 14 16:19:10 crc kubenswrapper[4860]: I1014 16:19:10.105798 4860 scope.go:117] "RemoveContainer" containerID="845c59f61175f5cbcba5150a687eaf4a6bb6a476cb1fbcad02c2d5e1c0573da8"
err="rpc error: code = NotFound desc = could not find container \"845c59f61175f5cbcba5150a687eaf4a6bb6a476cb1fbcad02c2d5e1c0573da8\": container with ID starting with 845c59f61175f5cbcba5150a687eaf4a6bb6a476cb1fbcad02c2d5e1c0573da8 not found: ID does not exist" containerID="845c59f61175f5cbcba5150a687eaf4a6bb6a476cb1fbcad02c2d5e1c0573da8" Oct 14 16:19:10 crc kubenswrapper[4860]: I1014 16:19:10.106047 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"845c59f61175f5cbcba5150a687eaf4a6bb6a476cb1fbcad02c2d5e1c0573da8"} err="failed to get container status \"845c59f61175f5cbcba5150a687eaf4a6bb6a476cb1fbcad02c2d5e1c0573da8\": rpc error: code = NotFound desc = could not find container \"845c59f61175f5cbcba5150a687eaf4a6bb6a476cb1fbcad02c2d5e1c0573da8\": container with ID starting with 845c59f61175f5cbcba5150a687eaf4a6bb6a476cb1fbcad02c2d5e1c0573da8 not found: ID does not exist" Oct 14 16:19:11 crc kubenswrapper[4860]: I1014 16:19:11.071969 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d29a84e-87a8-4728-ad34-2dd3655a8d33" path="/var/lib/kubelet/pods/5d29a84e-87a8-4728-ad34-2dd3655a8d33/volumes" Oct 14 16:19:24 crc kubenswrapper[4860]: I1014 16:19:24.141131 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-ql4q7_f5b47471-c477-482c-8462-62edd00df3bc/control-plane-machine-set-operator/0.log" Oct 14 16:19:24 crc kubenswrapper[4860]: I1014 16:19:24.282235 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jnwqb_8e925912-cc05-4c2b-8de7-ba05cd298123/kube-rbac-proxy/0.log" Oct 14 16:19:24 crc kubenswrapper[4860]: I1014 16:19:24.412952 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jnwqb_8e925912-cc05-4c2b-8de7-ba05cd298123/machine-api-operator/0.log" Oct 14 16:19:29 crc kubenswrapper[4860]: I1014 16:19:29.245580 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 16:19:29 crc kubenswrapper[4860]: I1014 16:19:29.247308 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 16:19:37 crc kubenswrapper[4860]: I1014 16:19:37.780363 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-z626q_d1972274-e4e4-4910-b996-98f16f66de5e/cert-manager-cainjector/0.log" Oct 14 16:19:37 crc kubenswrapper[4860]: I1014 16:19:37.793627 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-d96mc_ce7fd78e-7ed7-450e-bca7-ca9075b12a25/cert-manager-controller/0.log" Oct 14 16:19:37 crc kubenswrapper[4860]: I1014 16:19:37.950710 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-xjn4j_bbaa104e-a070-4f3d-8807-959b551312b9/cert-manager-webhook/0.log" Oct 14 16:19:50 crc kubenswrapper[4860]: I1014 16:19:50.040829 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-stgb7_e5be091b-1de9-4a04-80b5-68ddf4fc73da/nmstate-console-plugin/0.log" Oct 14 16:19:50 crc kubenswrapper[4860]: I1014 16:19:50.247885 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wjlnp_3e969961-ebc3-4830-b52c-bbeb744ea07e/nmstate-handler/0.log" Oct 14 16:19:50 crc kubenswrapper[4860]: I1014 16:19:50.258416 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-t966d_9bef2f42-22a8-4cde-8267-c890543fe82e/kube-rbac-proxy/0.log" Oct 14 16:19:50 crc kubenswrapper[4860]: I1014 16:19:50.301930 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-t966d_9bef2f42-22a8-4cde-8267-c890543fe82e/nmstate-metrics/0.log" Oct 14 16:19:50 crc kubenswrapper[4860]: I1014 16:19:50.509069 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-zn5lz_b7e911b9-3fd1-49b4-8716-70507a0b2aa4/nmstate-operator/0.log" Oct 14 16:19:50 crc kubenswrapper[4860]: I1014 16:19:50.579613 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-c6bxt_c941e868-49fb-4e89-896a-50f0dbbfe71b/nmstate-webhook/0.log" Oct 14 16:19:59 crc kubenswrapper[4860]: I1014 16:19:59.246117 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 16:19:59 crc kubenswrapper[4860]: I1014 16:19:59.246752 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 16:19:59 crc kubenswrapper[4860]: I1014 16:19:59.246803 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 16:19:59 crc kubenswrapper[4860]: I1014 16:19:59.247707 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"20e59eaa6be34b86827808ea8770025074b38ad785ce4c0eccd4f5e13bb7b741"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 16:19:59 crc kubenswrapper[4860]: I1014 16:19:59.247774 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://20e59eaa6be34b86827808ea8770025074b38ad785ce4c0eccd4f5e13bb7b741" gracePeriod=600 Oct 14 16:20:00 crc kubenswrapper[4860]: I1014 16:20:00.481727 4860 generic.go:334] "Generic (PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="20e59eaa6be34b86827808ea8770025074b38ad785ce4c0eccd4f5e13bb7b741" exitCode=0 Oct 14 16:20:00 crc kubenswrapper[4860]: I1014 16:20:00.481808 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" 
event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"20e59eaa6be34b86827808ea8770025074b38ad785ce4c0eccd4f5e13bb7b741"} Oct 14 16:20:00 crc kubenswrapper[4860]: I1014 16:20:00.482391 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerStarted","Data":"b4477d9879fc119c4f8fe514cf19098fa6002f3b1da02f47d08c50744d73afbb"} Oct 14 16:20:00 crc kubenswrapper[4860]: I1014 16:20:00.482415 4860 scope.go:117] "RemoveContainer" containerID="420e54bb4a129b169c82bef3346d955a2976fdc8282dd0960e34ca512290fcce" Oct 14 16:20:04 crc kubenswrapper[4860]: I1014 16:20:04.899450 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-np5tb_76eff526-1e46-4804-a3ef-dfdc845038d7/kube-rbac-proxy/0.log" Oct 14 16:20:05 crc kubenswrapper[4860]: I1014 16:20:05.107255 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-np5tb_76eff526-1e46-4804-a3ef-dfdc845038d7/controller/0.log" Oct 14 16:20:05 crc kubenswrapper[4860]: I1014 16:20:05.148852 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-frr-files/0.log" Oct 14 16:20:05 crc kubenswrapper[4860]: I1014 16:20:05.466788 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-frr-files/0.log" Oct 14 16:20:05 crc kubenswrapper[4860]: I1014 16:20:05.484205 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-reloader/0.log" Oct 14 16:20:05 crc kubenswrapper[4860]: I1014 16:20:05.507383 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-reloader/0.log" Oct 14 16:20:05 crc kubenswrapper[4860]: I1014 16:20:05.552295 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-metrics/0.log" Oct 14 16:20:05 crc kubenswrapper[4860]: I1014 16:20:05.727127 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-frr-files/0.log" Oct 14 16:20:05 crc kubenswrapper[4860]: I1014 16:20:05.766933 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-metrics/0.log" Oct 14 16:20:05 crc kubenswrapper[4860]: I1014 16:20:05.784945 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-metrics/0.log" Oct 14 16:20:05 crc kubenswrapper[4860]: I1014 16:20:05.819364 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-reloader/0.log" Oct 14 16:20:06 crc kubenswrapper[4860]: I1014 16:20:06.027824 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/controller/0.log" Oct 14 16:20:06 crc kubenswrapper[4860]: I1014 16:20:06.032401 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-reloader/0.log" Oct 14 16:20:06 crc kubenswrapper[4860]: I1014 16:20:06.052197 4860 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-frr-files/0.log" Oct 14 16:20:06 crc kubenswrapper[4860]: I1014 16:20:06.066765 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/cp-metrics/0.log" Oct 14 16:20:06 crc kubenswrapper[4860]: I1014 16:20:06.471414 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/frr-metrics/0.log" Oct 14 16:20:06 crc kubenswrapper[4860]: I1014 16:20:06.473233 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/kube-rbac-proxy/0.log" Oct 14 16:20:06 crc kubenswrapper[4860]: I1014 16:20:06.567110 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/kube-rbac-proxy-frr/0.log" Oct 14 16:20:06 crc kubenswrapper[4860]: I1014 16:20:06.731574 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/reloader/0.log" Oct 14 16:20:06 crc kubenswrapper[4860]: I1014 16:20:06.871744 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-zvcbb_db44a95a-8142-4353-affc-7227a205135c/frr-k8s-webhook-server/0.log" Oct 14 16:20:07 crc kubenswrapper[4860]: I1014 16:20:07.185545 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8575fd6987-wq9q7_03cac4a6-319b-4df3-baf8-82868fa438e5/manager/0.log" Oct 14 16:20:07 crc kubenswrapper[4860]: I1014 16:20:07.306637 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-75cf49597f-kjngh_19ebf47b-7556-421f-bc1a-442040a5995c/webhook-server/0.log" Oct 14 16:20:07 crc kubenswrapper[4860]: I1014 16:20:07.554215 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tbgl7_9ded0ce9-6baf-429a-b3ad-493b2bfda7de/kube-rbac-proxy/0.log" Oct 14 16:20:08 crc kubenswrapper[4860]: I1014 16:20:08.384629 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7545s_63f2d00d-6dad-48ec-91c9-33ba7f88c5f2/frr/0.log" Oct 14 16:20:08 crc kubenswrapper[4860]: I1014 16:20:08.654755 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tbgl7_9ded0ce9-6baf-429a-b3ad-493b2bfda7de/speaker/0.log" Oct 14 16:20:20 crc kubenswrapper[4860]: I1014 16:20:20.723626 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6_8f866932-2796-4d36-82ea-ffac60aee340/util/0.log" Oct 14 16:20:20 crc kubenswrapper[4860]: I1014 16:20:20.960719 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6_8f866932-2796-4d36-82ea-ffac60aee340/pull/0.log" Oct 14 16:20:20 crc kubenswrapper[4860]: I1014 16:20:20.965185 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6_8f866932-2796-4d36-82ea-ffac60aee340/pull/0.log" Oct 14 16:20:21 crc kubenswrapper[4860]: I1014 16:20:21.019045 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6_8f866932-2796-4d36-82ea-ffac60aee340/util/0.log" Oct 14 16:20:21 crc kubenswrapper[4860]: I1014 16:20:21.379594 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6_8f866932-2796-4d36-82ea-ffac60aee340/util/0.log" Oct 14 16:20:21 crc kubenswrapper[4860]: I1014 16:20:21.446633 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6_8f866932-2796-4d36-82ea-ffac60aee340/pull/0.log" Oct 14 16:20:21 crc kubenswrapper[4860]: I1014 16:20:21.565196 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gz8h6_8f866932-2796-4d36-82ea-ffac60aee340/extract/0.log" Oct 14 16:20:21 crc kubenswrapper[4860]: I1014 16:20:21.631603 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-48p8c_7547b7d4-7dbb-4f07-a064-8862a12c572c/extract-utilities/0.log" Oct 14 16:20:21 crc kubenswrapper[4860]: I1014 16:20:21.929604 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-48p8c_7547b7d4-7dbb-4f07-a064-8862a12c572c/extract-content/0.log" Oct 14 16:20:21 crc kubenswrapper[4860]: I1014 16:20:21.957078 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-48p8c_7547b7d4-7dbb-4f07-a064-8862a12c572c/extract-content/0.log" Oct 14 16:20:21 crc kubenswrapper[4860]: I1014 16:20:21.990066 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-48p8c_7547b7d4-7dbb-4f07-a064-8862a12c572c/extract-utilities/0.log" Oct 14 16:20:22 crc kubenswrapper[4860]: I1014 16:20:22.174280 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-48p8c_7547b7d4-7dbb-4f07-a064-8862a12c572c/extract-content/0.log" Oct 14 16:20:22 crc kubenswrapper[4860]: I1014 16:20:22.176149 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-48p8c_7547b7d4-7dbb-4f07-a064-8862a12c572c/extract-utilities/0.log" Oct 14 16:20:22 crc kubenswrapper[4860]: I1014 16:20:22.513794 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-48p8c_7547b7d4-7dbb-4f07-a064-8862a12c572c/registry-server/0.log" Oct 14 16:20:22 crc kubenswrapper[4860]: I1014 16:20:22.542444 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m5zb9_06644532-4731-4669-9d9f-c26cfa66a0de/extract-utilities/0.log" Oct 14 16:20:22 crc kubenswrapper[4860]: I1014 16:20:22.736609 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m5zb9_06644532-4731-4669-9d9f-c26cfa66a0de/extract-content/0.log" Oct 14 16:20:22 crc kubenswrapper[4860]: I1014 16:20:22.767699 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m5zb9_06644532-4731-4669-9d9f-c26cfa66a0de/extract-utilities/0.log" Oct 14 16:20:22 crc kubenswrapper[4860]: I1014 16:20:22.786703 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m5zb9_06644532-4731-4669-9d9f-c26cfa66a0de/extract-content/0.log" Oct 14 16:20:22 
Oct 14 16:20:23 crc kubenswrapper[4860]: I1014 16:20:23.019953 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m5zb9_06644532-4731-4669-9d9f-c26cfa66a0de/extract-content/0.log"
Oct 14 16:20:23 crc kubenswrapper[4860]: I1014 16:20:23.329133 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg_4c172442-19ed-484a-8404-5a5373f066e1/util/0.log"
Oct 14 16:20:23 crc kubenswrapper[4860]: I1014 16:20:23.638644 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg_4c172442-19ed-484a-8404-5a5373f066e1/util/0.log"
Oct 14 16:20:23 crc kubenswrapper[4860]: I1014 16:20:23.654751 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg_4c172442-19ed-484a-8404-5a5373f066e1/pull/0.log"
Oct 14 16:20:23 crc kubenswrapper[4860]: I1014 16:20:23.704094 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg_4c172442-19ed-484a-8404-5a5373f066e1/pull/0.log"
Oct 14 16:20:23 crc kubenswrapper[4860]: I1014 16:20:23.916174 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m5zb9_06644532-4731-4669-9d9f-c26cfa66a0de/registry-server/0.log"
Oct 14 16:20:24 crc kubenswrapper[4860]: I1014 16:20:24.007571 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg_4c172442-19ed-484a-8404-5a5373f066e1/pull/0.log"
Oct 14 16:20:24 crc kubenswrapper[4860]: I1014 16:20:24.043727 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg_4c172442-19ed-484a-8404-5a5373f066e1/util/0.log"
Oct 14 16:20:24 crc kubenswrapper[4860]: I1014 16:20:24.087488 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835csqbdg_4c172442-19ed-484a-8404-5a5373f066e1/extract/0.log"
Oct 14 16:20:24 crc kubenswrapper[4860]: I1014 16:20:24.372899 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkgfs_8b57e8b6-5f6f-42fb-a3c2-53567553c663/extract-utilities/0.log"
Oct 14 16:20:24 crc kubenswrapper[4860]: I1014 16:20:24.384239 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4tjxk_4e88f73d-d331-4edf-903f-2930d09f8fd9/marketplace-operator/0.log"
Oct 14 16:20:24 crc kubenswrapper[4860]: I1014 16:20:24.617586 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkgfs_8b57e8b6-5f6f-42fb-a3c2-53567553c663/extract-content/0.log"
Oct 14 16:20:24 crc kubenswrapper[4860]: I1014 16:20:24.635492 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkgfs_8b57e8b6-5f6f-42fb-a3c2-53567553c663/extract-content/0.log"
Oct 14 16:20:24 crc kubenswrapper[4860]: I1014 16:20:24.690198 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkgfs_8b57e8b6-5f6f-42fb-a3c2-53567553c663/extract-utilities/0.log"
Oct 14 16:20:24 crc kubenswrapper[4860]: I1014 16:20:24.908411 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkgfs_8b57e8b6-5f6f-42fb-a3c2-53567553c663/extract-content/0.log"
Oct 14 16:20:24 crc kubenswrapper[4860]: I1014 16:20:24.968460 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkgfs_8b57e8b6-5f6f-42fb-a3c2-53567553c663/extract-utilities/0.log"
Oct 14 16:20:25 crc kubenswrapper[4860]: I1014 16:20:25.207073 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkgfs_8b57e8b6-5f6f-42fb-a3c2-53567553c663/registry-server/0.log"
Oct 14 16:20:25 crc kubenswrapper[4860]: I1014 16:20:25.271008 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsrz4_f5699bb2-6633-43ac-9d64-3b83f3471e4d/extract-utilities/0.log"
Oct 14 16:20:25 crc kubenswrapper[4860]: I1014 16:20:25.405750 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsrz4_f5699bb2-6633-43ac-9d64-3b83f3471e4d/extract-utilities/0.log"
Oct 14 16:20:25 crc kubenswrapper[4860]: I1014 16:20:25.443146 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsrz4_f5699bb2-6633-43ac-9d64-3b83f3471e4d/extract-content/0.log"
Oct 14 16:20:25 crc kubenswrapper[4860]: I1014 16:20:25.468474 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsrz4_f5699bb2-6633-43ac-9d64-3b83f3471e4d/extract-content/0.log"
Oct 14 16:20:25 crc kubenswrapper[4860]: I1014 16:20:25.931263 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsrz4_f5699bb2-6633-43ac-9d64-3b83f3471e4d/extract-content/0.log"
Oct 14 16:20:25 crc kubenswrapper[4860]: I1014 16:20:25.966191 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsrz4_f5699bb2-6633-43ac-9d64-3b83f3471e4d/extract-utilities/0.log"
Oct 14 16:20:26 crc kubenswrapper[4860]: I1014 16:20:26.616864 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsrz4_f5699bb2-6633-43ac-9d64-3b83f3471e4d/registry-server/0.log"
Oct 14 16:22:28 crc kubenswrapper[4860]: I1014 16:22:28.154289 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" podUID="1b0c2826-792e-44ca-9bc1-830aefee72d6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 14 16:22:28 crc kubenswrapper[4860]: I1014 16:22:28.154316 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" podUID="1b0c2826-792e-44ca-9bc1-830aefee72d6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 14 16:22:29 crc kubenswrapper[4860]: I1014 16:22:29.245887 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 16:22:29 crc kubenswrapper[4860]: I1014 16:22:29.246445 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 16:22:31 crc kubenswrapper[4860]: I1014 16:22:31.648143 4860 patch_prober.go:28] interesting pod/route-controller-manager-7d4fb8b4bd-b47xc container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 16:22:31 crc kubenswrapper[4860]: I1014 16:22:31.648418 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7d4fb8b4bd-b47xc" podUID="fd6ff5f6-6417-4957-a382-89378c84071d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 16:22:35 crc kubenswrapper[4860]: I1014 16:22:35.821261 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="0bc2faed-f1e5-4d65-80be-f4b0cdf1ffee" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Oct 14 16:22:36 crc kubenswrapper[4860]: I1014 16:22:36.095164 4860 patch_prober.go:28] interesting pod/oauth-openshift-77df6bdc9c-n997h container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.54:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 16:22:36 crc kubenswrapper[4860]: I1014 16:22:36.095224 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-77df6bdc9c-n997h" podUID="77cdf7e8-dacb-46fd-9393-25d5e36e079e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.54:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 16:22:38 crc kubenswrapper[4860]: I1014 16:22:38.113251 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-hpkm4" podUID="1b0c2826-792e-44ca-9bc1-830aefee72d6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 16:22:45 crc kubenswrapper[4860]: I1014 16:22:45.111865 4860 generic.go:334] "Generic (PLEG): container finished" podID="1b547cb9-b2c8-444a-b8d3-77e668f953c3" containerID="da93f14a58969d91833ef089cca1896c204c5c1c24f47d39e877ab47755146a3" exitCode=0 Oct 14 16:22:45 crc kubenswrapper[4860]: I1014 16:22:45.111970 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r4bls/must-gather-l94kb" event={"ID":"1b547cb9-b2c8-444a-b8d3-77e668f953c3","Type":"ContainerDied","Data":"da93f14a58969d91833ef089cca1896c204c5c1c24f47d39e877ab47755146a3"} Oct 14 16:22:45 crc kubenswrapper[4860]: I1014 16:22:45.113944 4860 scope.go:117] "RemoveContainer" 
containerID="da93f14a58969d91833ef089cca1896c204c5c1c24f47d39e877ab47755146a3" Oct 14 16:22:45 crc kubenswrapper[4860]: I1014 16:22:45.821892 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-r4bls_must-gather-l94kb_1b547cb9-b2c8-444a-b8d3-77e668f953c3/gather/0.log" Oct 14 16:22:57 crc kubenswrapper[4860]: I1014 16:22:57.785543 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-r4bls/must-gather-l94kb"] Oct 14 16:22:57 crc kubenswrapper[4860]: I1014 16:22:57.786504 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-r4bls/must-gather-l94kb" podUID="1b547cb9-b2c8-444a-b8d3-77e668f953c3" containerName="copy" containerID="cri-o://660b073c3b82eeaea8febd92751199953791af2aaea7dfb6ab04b161f96f865d" gracePeriod=2 Oct 14 16:22:57 crc kubenswrapper[4860]: I1014 16:22:57.796507 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-r4bls/must-gather-l94kb"] Oct 14 16:22:58 crc kubenswrapper[4860]: I1014 16:22:58.224964 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-r4bls_must-gather-l94kb_1b547cb9-b2c8-444a-b8d3-77e668f953c3/copy/0.log" Oct 14 16:22:58 crc kubenswrapper[4860]: I1014 16:22:58.225511 4860 generic.go:334] "Generic (PLEG): container finished" podID="1b547cb9-b2c8-444a-b8d3-77e668f953c3" containerID="660b073c3b82eeaea8febd92751199953791af2aaea7dfb6ab04b161f96f865d" exitCode=143 Oct 14 16:22:58 crc kubenswrapper[4860]: I1014 16:22:58.815943 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-r4bls_must-gather-l94kb_1b547cb9-b2c8-444a-b8d3-77e668f953c3/copy/0.log" Oct 14 16:22:58 crc kubenswrapper[4860]: I1014 16:22:58.816730 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r4bls/must-gather-l94kb" Oct 14 16:22:58 crc kubenswrapper[4860]: I1014 16:22:58.837715 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk69f\" (UniqueName: \"kubernetes.io/projected/1b547cb9-b2c8-444a-b8d3-77e668f953c3-kube-api-access-tk69f\") pod \"1b547cb9-b2c8-444a-b8d3-77e668f953c3\" (UID: \"1b547cb9-b2c8-444a-b8d3-77e668f953c3\") " Oct 14 16:22:58 crc kubenswrapper[4860]: I1014 16:22:58.837821 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1b547cb9-b2c8-444a-b8d3-77e668f953c3-must-gather-output\") pod \"1b547cb9-b2c8-444a-b8d3-77e668f953c3\" (UID: \"1b547cb9-b2c8-444a-b8d3-77e668f953c3\") " Oct 14 16:22:58 crc kubenswrapper[4860]: I1014 16:22:58.876632 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b547cb9-b2c8-444a-b8d3-77e668f953c3-kube-api-access-tk69f" (OuterVolumeSpecName: "kube-api-access-tk69f") pod "1b547cb9-b2c8-444a-b8d3-77e668f953c3" (UID: "1b547cb9-b2c8-444a-b8d3-77e668f953c3"). InnerVolumeSpecName "kube-api-access-tk69f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 16:22:58 crc kubenswrapper[4860]: I1014 16:22:58.940001 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk69f\" (UniqueName: \"kubernetes.io/projected/1b547cb9-b2c8-444a-b8d3-77e668f953c3-kube-api-access-tk69f\") on node \"crc\" DevicePath \"\"" Oct 14 16:22:59 crc kubenswrapper[4860]: I1014 16:22:59.021077 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b547cb9-b2c8-444a-b8d3-77e668f953c3-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1b547cb9-b2c8-444a-b8d3-77e668f953c3" (UID: "1b547cb9-b2c8-444a-b8d3-77e668f953c3"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 16:22:59 crc kubenswrapper[4860]: I1014 16:22:59.042109 4860 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1b547cb9-b2c8-444a-b8d3-77e668f953c3-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 14 16:22:59 crc kubenswrapper[4860]: I1014 16:22:59.120726 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b547cb9-b2c8-444a-b8d3-77e668f953c3" path="/var/lib/kubelet/pods/1b547cb9-b2c8-444a-b8d3-77e668f953c3/volumes" Oct 14 16:22:59 crc kubenswrapper[4860]: I1014 16:22:59.235633 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-r4bls_must-gather-l94kb_1b547cb9-b2c8-444a-b8d3-77e668f953c3/copy/0.log" Oct 14 16:22:59 crc kubenswrapper[4860]: I1014 16:22:59.236324 4860 scope.go:117] "RemoveContainer" containerID="660b073c3b82eeaea8febd92751199953791af2aaea7dfb6ab04b161f96f865d" Oct 14 16:22:59 crc kubenswrapper[4860]: I1014 16:22:59.236599 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-r4bls/must-gather-l94kb" Oct 14 16:22:59 crc kubenswrapper[4860]: I1014 16:22:59.245811 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 16:22:59 crc kubenswrapper[4860]: I1014 16:22:59.245876 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 16:22:59 crc kubenswrapper[4860]: I1014 16:22:59.267535 4860 scope.go:117] "RemoveContainer" containerID="da93f14a58969d91833ef089cca1896c204c5c1c24f47d39e877ab47755146a3" Oct 14 16:23:20 crc kubenswrapper[4860]: I1014 16:23:20.323777 4860 scope.go:117] "RemoveContainer" containerID="1e7d4db45f3530897644a1c1db41aff3a181a07e2cada8b0eef0779c0c674825" Oct 14 16:23:29 crc kubenswrapper[4860]: I1014 16:23:29.245339 4860 patch_prober.go:28] interesting pod/machine-config-daemon-6ldv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 16:23:29 crc kubenswrapper[4860]: I1014 16:23:29.245959 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 16:23:29 crc kubenswrapper[4860]: I1014 16:23:29.246015 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" Oct 14 16:23:29 crc kubenswrapper[4860]: I1014 16:23:29.246813 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4477d9879fc119c4f8fe514cf19098fa6002f3b1da02f47d08c50744d73afbb"} pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 16:23:29 crc kubenswrapper[4860]: I1014 16:23:29.246871 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" containerName="machine-config-daemon" containerID="cri-o://b4477d9879fc119c4f8fe514cf19098fa6002f3b1da02f47d08c50744d73afbb" gracePeriod=600 Oct 14 16:23:29 crc kubenswrapper[4860]: E1014 16:23:29.471225 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:23:29 crc kubenswrapper[4860]: I1014 16:23:29.529167 4860 generic.go:334] "Generic 
(PLEG): container finished" podID="6436186e-e1ba-4c37-b8f9-210de837a051" containerID="b4477d9879fc119c4f8fe514cf19098fa6002f3b1da02f47d08c50744d73afbb" exitCode=0 Oct 14 16:23:29 crc kubenswrapper[4860]: I1014 16:23:29.529218 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" event={"ID":"6436186e-e1ba-4c37-b8f9-210de837a051","Type":"ContainerDied","Data":"b4477d9879fc119c4f8fe514cf19098fa6002f3b1da02f47d08c50744d73afbb"} Oct 14 16:23:29 crc kubenswrapper[4860]: I1014 16:23:29.529938 4860 scope.go:117] "RemoveContainer" containerID="20e59eaa6be34b86827808ea8770025074b38ad785ce4c0eccd4f5e13bb7b741" Oct 14 16:23:29 crc kubenswrapper[4860]: I1014 16:23:29.530159 4860 scope.go:117] "RemoveContainer" containerID="b4477d9879fc119c4f8fe514cf19098fa6002f3b1da02f47d08c50744d73afbb" Oct 14 16:23:29 crc kubenswrapper[4860]: E1014 16:23:29.530476 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:23:45 crc kubenswrapper[4860]: I1014 16:23:45.061343 4860 scope.go:117] "RemoveContainer" containerID="b4477d9879fc119c4f8fe514cf19098fa6002f3b1da02f47d08c50744d73afbb" Oct 14 16:23:45 crc kubenswrapper[4860]: E1014 16:23:45.062226 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:23:59 crc kubenswrapper[4860]: I1014 16:23:59.061305 4860 scope.go:117] "RemoveContainer" containerID="b4477d9879fc119c4f8fe514cf19098fa6002f3b1da02f47d08c50744d73afbb" Oct 14 16:23:59 crc kubenswrapper[4860]: E1014 16:23:59.062127 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:24:14 crc kubenswrapper[4860]: I1014 16:24:14.061946 4860 scope.go:117] "RemoveContainer" containerID="b4477d9879fc119c4f8fe514cf19098fa6002f3b1da02f47d08c50744d73afbb" Oct 14 16:24:14 crc kubenswrapper[4860]: E1014 16:24:14.062768 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:24:20 crc kubenswrapper[4860]: I1014 16:24:20.395001 4860 scope.go:117] "RemoveContainer" containerID="5ade317291d80b57b052051b54a4d1e0fd4d7ecd8d946831bccc0efe75b0763b" Oct 14 
16:24:20 crc kubenswrapper[4860]: I1014 16:24:20.418503 4860 scope.go:117] "RemoveContainer" containerID="ceb00f0d00f615ed2e5c00f6f4f4e74ababfd17e4dc2a10418bb740081d693f6" Oct 14 16:24:25 crc kubenswrapper[4860]: I1014 16:24:25.061610 4860 scope.go:117] "RemoveContainer" containerID="b4477d9879fc119c4f8fe514cf19098fa6002f3b1da02f47d08c50744d73afbb" Oct 14 16:24:25 crc kubenswrapper[4860]: E1014 16:24:25.062140 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:24:39 crc kubenswrapper[4860]: I1014 16:24:39.075441 4860 scope.go:117] "RemoveContainer" containerID="b4477d9879fc119c4f8fe514cf19098fa6002f3b1da02f47d08c50744d73afbb" Oct 14 16:24:39 crc kubenswrapper[4860]: E1014 16:24:39.076360 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:24:50 crc kubenswrapper[4860]: I1014 16:24:50.063425 4860 scope.go:117] "RemoveContainer" containerID="b4477d9879fc119c4f8fe514cf19098fa6002f3b1da02f47d08c50744d73afbb" Oct 14 16:24:50 crc kubenswrapper[4860]: E1014 16:24:50.064607 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:25:01 crc kubenswrapper[4860]: I1014 16:25:01.062520 4860 scope.go:117] "RemoveContainer" containerID="b4477d9879fc119c4f8fe514cf19098fa6002f3b1da02f47d08c50744d73afbb" Oct 14 16:25:01 crc kubenswrapper[4860]: E1014 16:25:01.063257 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:25:16 crc kubenswrapper[4860]: I1014 16:25:16.061655 4860 scope.go:117] "RemoveContainer" containerID="b4477d9879fc119c4f8fe514cf19098fa6002f3b1da02f47d08c50744d73afbb" Oct 14 16:25:16 crc kubenswrapper[4860]: E1014 16:25:16.062513 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6ldv4_openshift-machine-config-operator(6436186e-e1ba-4c37-b8f9-210de837a051)\"" pod="openshift-machine-config-operator/machine-config-daemon-6ldv4" podUID="6436186e-e1ba-4c37-b8f9-210de837a051" Oct 14 16:25:20 crc 
kubenswrapper[4860]: I1014 16:25:20.489864 4860 scope.go:117] "RemoveContainer" containerID="a501a5c3ff0357a158ab9dc67ac9c087f4f34e425f8fbdd57fc83e94f392cf98" Oct 14 16:25:20 crc kubenswrapper[4860]: I1014 16:25:20.520973 4860 scope.go:117] "RemoveContainer" containerID="0712a1543941fc148211b87800337cb9c50565b03127bca906c773d8bdf29aa8"
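From 16:23:29 onward the machine-config-daemon pod is in CrashLoopBackOff: each sync attempt to restart the container is refused with "back-off 5m0s restarting failed container=...". The 5m0s is kubelet's restart back-off at its cap; kubelet doubles the delay after each failed restart from a 10s base up to a five-minute maximum. The sketch below reproduces that schedule; the constants are the documented upstream defaults, and the code is illustrative, not kubelet's.

// backoff.go - sketch of kubelet's container restart back-off schedule.
// Constants are the documented defaults; the loop is illustrative.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay = 10 * time.Second // first restart delay
		maxDelay     = 5 * time.Minute  // the "back-off 5m0s" cap in the log
	)

	delay := initialDelay
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("failed restart %d: next attempt in %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // after ~5 failures the pod sits at 5m0s
		}
	}
}

Once a container runs cleanly for ten minutes, kubelet resets this back-off to the base delay, so the 5m0s messages persist only while the daemon keeps failing its liveness probe immediately after each restart.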